Tuesday, December 7, 2021

Kubernetes Basics

Kubernetes playgrounds
Environments that give you a pre-installed Kubernetes cluster,
like Katacoda

kubectl run web-server --image=nginx

kubectl get pods

kubectl get pods -o wide

kubectl describe pod web-server

Each pod has at least 2 containers.
The pause container is created first.
Pause container:
holds the pod's network namespace
holds the pod's IP address
sets up the network namespace for all other containers that join the pod
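
A quick way to see these pause containers on a worker node (a sketch, assuming the Docker runtime used in these labs):
# run on the node itself, not via kubectl; every pod shows one extra pause container
docker ps | grep pause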

kubectl delete pod web-server

How to check a node description
kubectl describe node node_name

Why can't pods be scheduled on the master node by default?
Because it has a taint: node-role.kubernetes.io/master:NoSchedule


How to remove the NoSchedule taint from the master
kubectl taint nodes --all node-role.kubernetes.io/master-
 

How to delete all running pods
kubectl delete pod --all

How to create/delete a pod using the file (YAML) method
Check the field details with the below cmd
kubectl explain pod


apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx

kubectl apply -f pod.yml
kubectl delete -f pod.yml

How to enter a container inside a pod
kubectl exec -it pod_name -- /bin/bash

If multiple containers are running in one pod:
describe the pod first
kubectl describe pod pod_name
find the container name and enter it with -c
kubectl exec -it pod_name -c container_name -- /bin/bash

If you want to enter a pod right after creating it:
kubectl run -it dev-server --image=ubuntu:14.04


Note: a pod cannot recreate itself after deletion unless it is managed by a ReplicationController, ReplicaSet, or Deployment.

Kubernetes command
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands


 

Kubernetes replication types
(Kubernetes pod controllers)

In ReplicationController
We can define only 1 label.
It supports equality-based selectors.
It has a rolling-update feature,
but there is a small downtime during the rolling update,
and you cannot do a rollback.

In ReplicaSet
We can define multiple labels.
It supports both equality-based and set-based selectors.
It does not have a rolling-update feature.

Equality-based - For example, if we provide the following selectors:
env = prod
tier != frontend
The first selects all resources with the key env and the value prod. The second selects all resources with the key tier and a value different from frontend, plus all resources that have no label with the tier key.

One usage scenario for equality-based label requirements is Pods specifying node selection criteria.

Set-based - For example, if we provide the following selectors:
env in (prod, qa)
tier notin (frontend, backend)
partition
The first selector selects all resources with the key env and the value prod or qa.
The second selects all resources with the key tier and values other than frontend and backend, plus all resources with no label with the tier key. The third selects all resources that have a label with the key partition; no values are checked.

The set-based selector is a general form of equality, since env=prod is equivalent to env in (prod). Similarly for != and notin.
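
These selectors can be used directly on the command line; a quick sketch using the example keys above:
kubectl get pods -l env=prod,tier!=frontend
kubectl get pods -l 'env in (prod,qa),tier notin (frontend,backend)'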

What is a rolling update?
When you update the image version of your pods, you can use a rolling update to replace them gradually.

In Deployment
We create a Deployment object.
It already includes the ReplicaSet feature,
so we can do rolling updates with this method.
It does NOT have downtime: it creates another ReplicaSet during the rolling update,
creates 1 pod in the new ReplicaSet, and if that succeeds removes 1 pod from the old ReplicaSet, repeating until done.
When all pods are removed from the old ReplicaSet it is not deleted; its replica count is scaled to 0, so no traffic goes there.
If we have to roll back, the pods are moved back by scaling the old ReplicaSet up again.

Note: the most commonly used pod controller is the Deployment.

https://www.mirantis.com/blog/kubernetes-replication-controller-replica-set-and-deployments-understanding-replication-options/

Use case of a replication controller
Usually, if you delete a running pod and a replacement is not created, it means the pod is not managed by a replication controller.

How to check whether a pod is managed by a replication controller:
kubectl describe pod pod_name



Replication controller

kubectl get rc

kubectl explain rc
 kubectl explain rc --recursive | less

apiVersion: v1
kind: ReplicationController
metadata:
  name: cloud
spec:
  replicas: 3
  selector:
    team: dev
  template:
    metadata:
      name: cloud
      labels:
        team: dev
    spec:
      containers:
        - name: cloud
          image: nginx:1.7.1
          ports:
            - containerPort: 80

How to check pods and RCs in one view:
kubectl get pod,rc

To also show the labels:
kubectl get pod,rc --show-labels

Describe rc
kubectl describe rc cloud


Lab
Remove a label from an existing pod,
and set another label on it.
Then create a replication controller whose selector matches that label, e.g. with 3 replicas:
it will adopt the 1 existing pod and create 2 new ones.

create a pod
kubectl run -it web --image=nginx

remove a label
#kubectl label pod pod_name label_name-
kubectl label pod web run-

set a label on a pod
#kubectl label pod pod_name key=value
kubectl label pod web team=prod

Now create a replication controller with 3 replicas selecting the same label team=prod (see the sketch below);
it will adopt the 1 existing pod and create 2 new ones.
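
A minimal sketch of such an RC (the name prod-team and the nginx image are assumptions):

apiVersion: v1
kind: ReplicationController
metadata:
  name: prod-team
spec:
  replicas: 3
  selector:
    team: prod
  template:
    metadata:
      labels:
        team: prod
    spec:
      containers:
        - name: web
          image: nginx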


In case you want to delete only the RC and not its pods (orphan the pods):
kubectl delete rc --cascade=false rc_name
kubectl delete rc --cascade=false cloud

Rolling update in Replication controller



Backend process
Create 1 RC with 3 nginx pods - 1.7.1.
Do a rolling update to nginx - 1.9.1:
it creates another RC,
creates a pod in the new RC and deletes a pod from the old RC, one by one.
Once all pods exist in the new RC, both RCs are swapped at the same time:
a new RC is created with the old RC's name and the new nginx pods,
so services are not confused by a changed RC name.
Drawback: there is a little bit of downtime.

Note: the rolling-update feature of the replication controller doesn't work on newer Kubernetes versions; you would need extra tooling if you want to run it.

Before 1.14 it was working.

cmd to do a rolling update on a replication controller
kubectl rolling-update old_rc --update-period=10s -f rc.yml
kubectl rolling-update dev-team --update-period=10s -f rc.yml

Update the image manually by cmd:
kubectl rolling-update dev-team --image=nginx:1.7.1

Kubernetes ReplicaSet
You can select multiple labels.
Rolling update is not available.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: cloud
spec:
  replicas: 3
  selector:
    matchExpressions:
     - key: team
       operator: In
       values:
         - dev
         - prod
         - test
     - key: team
       operator: NotIn
       values:
         - team
  template:
    metadata:
      name: cloud
      labels:
        team: dev
    spec:
      containers:
        - name: cloud
          image: nginx:1.7.1
          ports:
            - containerPort: 80

List replica set
kubectl get rs

describe replica set
kubectl describe rs replicaSet_name
kubectl describe rs cloud



Note: whenever you create a pod for testing purposes, don't put any label on it;
if that label matches a selector someone else has defined, the pod could become part of that controller.

Common interview question: your pod got deleted automatically, what could be the reason?
Possibly a label was set on it, and the same label was used as a selector by someone's replica controller, so the pod was adopted there (and later scaled down or replaced).

Kubernetes Deployment

A Deployment has the features of both the ReplicaSet and the ReplicationController.

1. We can define multiple labels as the selector.
2. We can also do rolling updates.
3. A Deployment is created on top of a ReplicaSet.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.2
        ports:
        - containerPort: 80

check deployment
kubectl get deployments

check pod, deployment,rs
kubectl get deployment,rs,pod


kubectl rollout history deployment/deployment_name
kubectl rollout history deployment.apps/nginx-deployment

If you want to update the deployment, make the changes in the yml and apply it again.

If you want to change the deployment image directly by cmd:
kubectl set image deployment/deployment_name container_name=image:tag --record=true
kubectl set image deployment.apps/nginx-deployment nginx=nginx:1.15.1 --record=true

Edit the deployment and update the image; the new image will be rolled out automatically:
kubectl edit deployment deployment_name
kubectl edit deployments.apps nginx-deployment

Check a specific revision in the rollout history:
kubectl rollout history deployment nginx-deployment --revision=5

Rollback:
kubectl rollout undo deployment/deployment_name
kubectl rollout undo deployment.apps/nginx-deployment
Note: if you run it without a revision number, it rolls back to the previous revision.

Rollback to a particular revision:
kubectl rollout undo deployment.apps/nginx-deployment --to-revision=2

Leaving a rollout in progress can be risky; we can pause it so nobody else can push out changes until it is resumed:
kubectl rollout pause deployment/deployment_name
kubectl rollout pause deployment.apps/nginx-deployment

kubectl rollout resume deployment_name
kubectl rollout resume deployment.apps/nginx-deployment



Kubernetes Deployment Strategies

1. Rolling deployment
2. Recreate
3. Canary deployment (blue & green deployment, also called red & black deployment)


Rolling Deployment
maxSurge: 2 (number/percentage of extra pods created first, then 2 old pods are removed)
maxUnavailable: 2 (number/percentage of old pods removed first, then new pods are created)

If you don't define maxSurge/maxUnavailable, the defaults (25% each) are used.

Recreate
It will remove all existing pod then recreate new one.
speed is fast
drawback: downtime

Canary Deployment
Here an ingress controller works as a load balancer.
You can define where traffic needs to be forwarded,
you can define active/passive setups,
and you can define weights, e.g. 50% of the traffic to one side and 50% to the other.
You can use the Recreate strategy within a canary deployment as well.
It also works as DR (disaster recovery).
We can also use 4 nodes in a canary deployment: 2 nodes with the green deployment and 2 with the blue.


 

Rolling update deployment Strategies

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud
  labels:
    app: nginx
spec:
  replicas: 4
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.1
        ports:
        - containerPort: 80

Recreate Strategies

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud
  labels:
    app: nginx
spec:
  replicas: 4
  strategy:
    type: Recreate

  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.1
        ports:
        - containerPort: 80

It is fast,
but there is 100% downtime.
Use case: a QA environment, or the QA side of a blue/green deployment.

to check yml format for deployment
kubectl explain deployment --recursive | less

Kubernetes Service

By default we don't use pod IPs for mapping or for network communication,
so we put a Service on top of the deployment.
The Service also has an IP, and it maps all the pods behind it:
all pod endpoints are connected to the Service.


 


Service Type


1.ClusterIP
2.NodePort
3.LoadBalancer

ClusterIP
1st use case: when we need to connect one pod to another pod,
e.g. app to database - the app uses the database's service IP instead of the database pod IP.

Use of ClusterIP:
when we only want internal communication.
All deployments communicate internally with each other using service IPs.

 


NodePort

Used when you want to access your deployment from outside the cluster.
E.g. if you want the web app to be accessible from outside, you can use NodePort,
and keep the database as ClusterIP.

Note: you create a service for a deployment once; if you want to change its type you have to delete it first and create it again.
If you select NodePort as the service type, it also contains a ClusterIP within it.
Whenever you create a NodePort service, a random port is opened on every node IP and mapped to the service IP,
e.g. node_ip:33333 -> service_ip (10.200.11.10)
So from outside, a customer can hit any node IP on that port and it is redirected to the service IP.
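
A minimal sketch of a NodePort Service manifest, as the YAML equivalent of kubectl expose (the name, port numbers, and label here are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: web-server
spec:
  type: NodePort
  selector:
    app: web-server
  ports:
    - port: 80          # service (ClusterIP) port
      targetPort: 80    # container port
      nodePort: 30080   # optional; if omitted, a random port in 30000-32767 is assigned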

LoadBalancer
When you select LoadBalancer, it contains ClusterIP and NodePort within it.

Use case:
if you don't want clients to hit node IPs directly and want outside traffic to come to 1 IP, use ServiceType LoadBalancer.


Flow from LB to pod:
The LB (which registers the node IPs) forwards to a node IP on the node port.
The node port (which points to the service IP) forwards to the service IP.
The service IP is mapped to the pod endpoints and forwards to the pods.

For bare metal there is MetalLB, a load-balancer implementation for clusters that are not on a cloud.
If your cluster is not running on AWS or Azure and you don't have your own load balancer,
MetalLB is installed as pods in the cluster and works as the LB;
in its configuration you define your public IP pool.

If you are running Kubernetes on local infra you can also use another load balancer like F5.

Front-end port - service IP - 80
Back-end port - pod IP - 80

Service (ClusterIP) Lab
Create 2 yml files: 1 for the web-server, 1 for the db-server (a sketch of the db-server file follows the web-server yml below).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  labels:
    app: web-server
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.1
        ports:
        - containerPort: 80
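
A minimal sketch of the second file, the db-server deployment (for this demo the image is assumed to be nginx as well, just to have something answering on port 80):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-server
  labels:
    app: db-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-server
  template:
    metadata:
      labels:
        app: db-server
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.1
        ports:
        - containerPort: 80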

Deploy both, then check the pod IPs of both.
Enter the web-server pod:
kubectl exec -it web-server_pod_name -- /bin/bash
curl the db pod IP; you should be able to access it.

Now delete the db-server pod and curl the old db pod IP again: it is no longer reachable (the new pod got a new IP).

Now create a ClusterIP service:
kubectl expose deployment deployment_name --type=ClusterIP --port=service_port --target-port=container_port
kubectl expose deployment db-server --type=ClusterIP --port 80 --target-port=80

Now go inside the web pod again and hit the service IP of db-server: you are able to access the pod.
Delete the db pod and try again: the new db pod endpoint is mapped automatically
and you are still able to access db-server.
The service maps all pod endpoints to the service IP.

kubectl describe service db-server
Name:              db-server
Namespace:         default
Labels:            app=db-server
Annotations:       <none>
Selector:          app=db-server
Type:              ClusterIP
IP:                10.98.160.71
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.6:80,10.244.1.7:80,10.244.1.8:80 + 1 more...
Session Affinity:  None
Events:            <none>

The short form of service is svc
kubectl describe svc db-server

By default, 1 service (kubernetes) is always running in your cluster, and even if you delete it, it recreates itself.
kubectl get all -o wide
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   88m   <none>

On the master node all components also run as pods: the api-server, scheduler, and the database (etcd).
All these components communicate through the API server; the kubernetes service's endpoint is the api-server,
which uses the master node's IP,
e.g.
Endpoints:         172.17.0.19:6443

Service (NodePort) Lab
Create 2 yml files: 1 for the web-server, 1 for the db-server (the same deployments as in the ClusterIP lab).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  labels:
    app: web-server
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.1
        ports:
        - containerPort: 80

Deploy both.
Try to curl a node IP on a port now: you should not be able to access the pod content yet (no service exists).


Now create the NodePort services:
kubectl expose deployment deployment_name --type=NodePort --port=service_port --target-port=container_port
kubectl expose deployment db-server --type=NodePort
kubectl expose deployment web-server --type=NodePort

Now curl a node IP with the assigned NodePort: you should be able to access the pod content.

$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
db-server    NodePort    10.97.196.4    <none>        80:31657/TCP   27m
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        57m
web-server   NodePort    10.110.49.53   <none>        80:31346/TCP   29m

$ kubectl describe svc web-server
Name:                     web-server
Namespace:                default
Labels:                   app=web-server
Annotations:              <none>
Selector:                 app=web-server
Type:                     NodePort
IP:                       10.110.49.53
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31346/TCP
Endpoints:                10.244.1.3:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
controlplane   Ready    master   59m   v1.18.0   172.17.0.10   <none>        Ubuntu 18.04.5 LTS   4.15.0-122-generic   docker://19.3.13
node01         Ready    <none>   58m   v1.18.0   172.17.0.11   <none>        Ubuntu 18.04.5 LTS   4.15.0-122-generic   docker://19.3.13


$curl 172.17.0.10:31657

$curl 172.17.0.11:31657

$curl 172.17.0.10:31346

$curl 172.17.0.11:31346

Note: the NodePort is bound on every node IP, so you can access the web content via any node IP with that port number.

Drawback of NodePort:
you need to use the port number along with a node IP.

Service (LoadBalancer) Lab

It is used when you bind the node IP/port to 1 public IP and access it from outside.
MetalLB is a load-balancer implementation for bare-metal clusters; it also runs as pods,
and you define a pool of public IPs from which it picks one.
If you are using AWS or Azure, the cloud load balancer is used instead.

Traffic flow:
Load balancer IP to node IP:port
Node IP:port to service IP
Service IP to pod endpoints

LoadBalancer - binds all node IPs (with the node port) to one external IP
NodePort - all nodes bound to a particular port, pointing to the service IP
Service IP - binds all pod endpoints of the deployment

Lab

create 1 yml file 1 for web-server.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  labels:
    app: web-server
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.1
        ports:
        - containerPort: 80

Deploy it.

Now create the LoadBalancer service:
kubectl expose deployment deployment_name --type=LoadBalancer --port=service_port --target-port=container_port
kubectl expose deployment web-server --type=LoadBalancer

Now check whether you got an LB IP; if you don't receive an IP, or it stays <pending>, you did not get an IP from an LB pool.

metallb setup

Installation by manifest
To install MetalLB, apply the manifest:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml

configuration
vim metallb.yml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

Apply metallb.yml.
Check whether you got an external IP now.

curl the LoadBalancer IP; you should be able to access the web-server application now.

Before
controlplane $ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        41m
web-server   LoadBalancer   10.111.136.131   <pending>     80:31903/TCP   39s

After
$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>          443/TCP        48m
web-server   LoadBalancer   10.111.136.131   192.168.1.240   80:31903/TCP   7m24s

$ kubectl describe svc web-server
Name:                     web-server
Namespace:                default
Labels:                   app=web-server
Annotations:              <none>
Selector:                 app=web-server
Type:                     LoadBalancer
IP:                       10.111.136.131
LoadBalancer Ingress:     192.168.1.240
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31903/TCP
Endpoints:                10.244.1.3:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age    From                Message
  ----    ------        ----   ----                -------
  Normal  IPAllocated   8m12s  metallb-controller  Assigned IP "192.168.1.240"
  Normal  nodeAssigned  8m12s  metallb-speaker     announcing from node "controlplane"


$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
controlplane   Ready    master   59m   v1.18.0   172.17.0.10   <none>        Ubuntu 18.04.5 LTS   4.15.0-122-generic   docker://19.3.13
node01         Ready    <none>   58m   v1.18.0   172.17.0.11   <none>        Ubuntu 18.04.5 LTS   4.15.0-122-generic   docker://19.3.13

$ curl 192.168.1.240
now you are able to access web-server content

check metallb pods

$ kubectl -n metallb-system get pod
NAME                          READY   STATUS    RESTARTS   AGE
controller-56c7546946-4ns6w   1/1     Running   0          19m
speaker-n6dv8                 1/1     Running   0          19m
speaker-xl5mm                 1/1     Running   0          19m

Kubernetes Init Containers

Init container:
used when we want a container to run before the main container.
It performs some tasks, completes them, and then terminates.
Only then does the main container run.

Scenario in a company:
you have a requirement to deploy a web-server,
but before that you need to check the db-server,
e.g.
its health,
whether the db server is reachable (pingable),
whether it stays reachable for 30 seconds continuously, etc.

Scenario:
you have to deploy a web-server.
The source code is on git etc.
You need to wget it and put it into a volume;
this volume will be mounted into the web-server.


Lab
The yml file below creates 2 init containers and 1 main container.
The main container will not start until both init containers have found their services and terminated.

vim init-container.yml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]

you can check status
watch kubectl get all

kubectl describe pod myapp-pod

check logs
kubectl logs myapp-pod -c init-myservice
kubectl logs myapp-pod -c init-mydb

Now create the services below and watch kubectl get all:
the 2 init containers complete and terminate, and the main container starts.

vim service.yml

---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377


Lab

In the yml below we:
create a volume,
wget the content in an init container,
and mount the volume into the nginx container at
/usr/share/nginx/html

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://info.cern.ch
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}

Kubernetes Multi-Container Pod

Minimum containers in a pod: 2 (including the pause container).
Maximum: up to the size of your infra.

1st container of a pod - pause.
Pause container:
1. holds the pod IP
2. creates a network namespace - e.g. xyz - which is bound to the pod IP


Can we run multiple services in one container?
e.g. a database and a web-server in 1 container.
We can, but we should not,
because the whole point of container-based virtualization is to run each service in a separate container.

If we run multiple services in one container, they share libraries like in the old VM format, and there is no benefit from container-based virtualization.

Use case:
3-tier architecture solution,
where web - db - app connect with each other over localhost:port.

 

Sidecar
A content-creator container
feeds content to an apache container.

Content-creator container: creates and puts content into a volume (the volume can be persistent/shared or temporary in the pod), and the apache container uses the same volume to serve that content as the web app.
Both containers share the volume and the network.

Proxy setup
When we don't want direct traffic on our apache web server,
we use a proxy server container: the proxy listens on port 80, and we change the apache port to 8888.
The proxy server now forwards traffic to the apache web server.


 

An init container is also an example of a multi-container pod,
but it performs its task and gets terminated.

A pod behaves like a single localhost system (one computer):
inside a pod the containers share the network and volumes,
so the pod itself is treated as localhost.

If two containers in a multi-container pod try to bind the same port (e.g. two nginx containers both on port 80),
only 1 container will keep running in the pod; the other keeps crashing.

Lab

vi example.yml

apiVersion: v1
kind: Pod
metadata:
  name: cloud
spec:
  containers:
   - name: 1st
     image: nginx

   - name: 2nd
     image: nginx

Describe the pod; you will see that 1 of the containers keeps getting terminated (both nginx containers try to bind port 80).

Now override the command of the first container in the yml above:

vi example.yml

apiVersion: v1
kind: Pod
metadata:
  name: cloud
spec:
  containers:
   - name: 1st
     image: nginx
     args: ["sleep", "3600"]

   - name: 2nd
     image: nginx

Now describe the pod; you will see that both containers are running.

In case you want to go inside any container in your pod:
kubectl exec -it pod_name -- /bin/bash
This enters the default (first) container.

kubectl exec -it pod_name -c container_name -- /bin/bash

kubectl exec -it cloud -- /bin/bash
kubectl exec -it cloud -c 2nd -- /bin/bash

Check inside the containers:
hostname -i
Both containers' IPs should be the same.

How do multiple containers communicate with each other?
Like in a local environment: they communicate over localhost ports.

Lab
Check which ports are currently open with netstat:
netstat -tunlp

Open a port manually:
netcat -l -p 3306

Telnet to this port from the other container and you will see the communication.


Side Car

In the yml below:
creating a volume,
creating an nginx container and mounting the volume at
/usr/share/nginx/html,

creating another debian container mounting the same volume at /html,
and appending the output of the date cmd to index.html every second.

vim side_car.yml

apiVersion: v1
kind: Pod
metadata:
  name: mc1
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: 1st
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: 2nd
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          date >> /html/index.html;
          sleep 1;
        done

Kubernetes Storage

Persistent Volume
local hard drive


Local hard drive:
if you use the local drive of a worker node, the data is persistent on that node.
But suppose the pod is deleted and recreated on another worker node:
the local drive of worker node 1 is not mounted there,
so you are not able to access the data.

Network Volume & Storage
NFS
iSCSI
AWS S3
Azure
AWS EBS
Red Hat Gluster



To check the supported types,
go to the Kubernetes docs,
"Persistent Volumes"

PV - Persistent Volume
PVC - Persistent Volume Claim



remove taint
kubectl taint nodes --all node-role.kubernetes.io/master-

Lab

create a network volume like nfs, ebs
yum install nfs* -y
mkdir -p /common/volume
chmod -R 777 /common/volume
vim /etc/exports
/common/volume *(no_root_squash,rw,sync)
:wq  

systemctl start nfs-server
systemctl enable nfs-server

firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --reload

exportfs -v

A PV is like a VG (volume group),
a PVC is like an LV (logical volume).

1. A PVC can be attached to a pod; if the pod gets deleted you still keep the data.
2. A PVC can be attached to multiple pods at the same time (with a suitable access mode).

You can keep snapshots of a PVC,
and a snapshot can be restored later.
But you can't clone a PVC.


kubectl get pv

kubectl get pvc

vim pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: reliance-volume
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: reliance-volume
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /common/volume
    server: 172.31.27.104

kubectl apply -f pv.yml


accessModes:

ReadWriteOnce- the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.
ReadOnlyMany- the volume can be mounted as read-only by many nodes.
ReadWriteMany- the volume can be mounted as read-write by many nodes.
ReadWriteOncePod- the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod access mode if you want to ensure that only one pod across whole cluster can read that PVC or write to it. This is only supported for CSI volumes and Kubernetes version 1.22+.

persistentVolumeReclaimPolicy

Recycle - when the PVC is deleted, the data is scrubbed and cannot be claimed back; the space goes back to the PV.
e.g. PV 10GB, PVC 5GB created; if you delete the PVC, the PV is again 10GB and the data cannot be claimed.
Retain - if the PVC is deleted, the volume is not released automatically, and the data can still be claimed later.
Delete - the associated storage asset such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume is deleted.

storageClassName
If you have multiple storage classes, you can define the name here.
If you don't define it, one is selected automatically (the default class).

vim pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: reliance-rpower
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: reliance-volume

kubectl apply -f pvc.yml

kubectl get pv
kubectl describe pv reliance-volume

kubectl get pvc
kubectl describe pvc pvc_name

Lab

vim wordpress.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: kubernetes@123
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: reliance-rpower

kubectl describe pods wordpress-mysql-efgw

Go into the database pod (see the sketch below),
run show databases;
then create database india;

Now delete the pod;
a new pod will be created automatically (by the deployment).
Check again in the new pod whether the database is still there.
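
A sketch of those steps on the command line (the pod name suffix is a placeholder; the password comes from the manifest above):
kubectl get pods                                        # find the wordpress-mysql pod name
kubectl exec -it wordpress-mysql-xxxxx -- mysql -u root -pkubernetes@123
mysql> show databases;
mysql> create database india;
mysql> exit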

Namespace

If you create a pod:
kubectl run webserver --image=nginx

Use case:
can we create a pod with the same name in the same namespace?
No, another pod cannot be created with the same name,
so namespaces are the solution.
 

Use case:
dev-team
test-team
prod-team
Each team doesn't want to see the other teams' pods;
they want to see only their own team's pods.

One solution: create a separate cluster for each team,
but it is costly to set up a new cluster per team.

A better solution: create a different namespace for each team.


Use case:
Wipro received 3 projects:
project-1
project-2
project-3
Requirement: Wipro wants an isolated env for each project.
So you create 3 namespaces, one per project;
each namespace is an isolated env, and you grant rights to the engineers of each project accordingly.

$ kubectl get ns
NAME              STATUS   AGE
default           Active   42m
kube-node-lease   Active   42m
kube-public       Active   42m
kube-system       Active   42m

kube-system - all major components run in the kube-system namespace, e.g. the api server, controller manager, kube-proxy, and networking pods.
default - all pods we create without specifying a namespace go into the default namespace.

list pod with namespace
$ kubectl -n kube-system get pods

create namespace
$ kubectl create namespace project-1
$ kubectl create namespace project-2
$ kubectl create namespace project-3

Lab
create a pod
$ kubectl run dev-server --image=nginx
it will create a pod in default namespace

List pods of a different namespace, e.g. project-1:
$ kubectl -n project-1 get pods

create a pod in specific namespace
$ kubectl -n project-1 run dev-server --image=nginx

create pod with yml in specific namespace
apiVersion: v1
kind: Pod
metadata:
   name: project-1
   namespace: project-1
spec:
  containers:
    - name: project-1-container
      image: nginx

How does kubectl pick the default namespace when we don't mention any namespace,
and how do we change the default namespace? With contexts:

list context
$ kubectl config get-contexts

create context
$ kubectl config set-context project-1 --namespace=project-1 --user=kubernetes-admin --cluster=kubernetes
$ kubectl config set-context project-2 --namespace=project-2 --user=kubernetes-admin --cluster=kubernetes
$ kubectl config set-context project-3 --namespace=project-3 --user=kubernetes-admin --cluster=kubernetes

switch context
$ kubectl config use-context project-1

$ kubectl config get-contexts

Kubernetes Taints


Scenario (diagram not included):
we have 4 nodes; worker-node-4 is at 100% resource usage, so we don't want to add any more pods there.
We want pods to be scheduled on worker-node-3 and to ignore worker-node-4.

Solution: we put a NoSchedule taint on worker-node-4.

Another use case:
if we want to patch/upgrade a node, we can put a taint on that node for the duration.

By default the master node has the NoSchedule taint.

Lab1

$ kubectl run web-server --image=nginx --replicas=12

Check which node each pod is on:
$ kubectl get pods -o wide
All pods are on the worker node, because of the NoSchedule taint on the master node.

Pick the taint name from the describe cmd:
kubectl describe node master | grep -i taint

remove taint
kubectl taint nodes --all node-role.kubernetes.io/master-

Now create pods again
$ kubectl run web-server --image=nginx --replicas=12
Now pods get scheduled on the master node as well.

Lab2

Put taint on worker node
$ kubectl taint nodes node01 key=value:NoSchedule

Now create pods again:
$ kubectl run web-server --image=nginx --replicas=12
Now all pods get created on the master node.

remove taint
$ kubectl taint nodes node01 key:NoSchedule-

Now create pods again
$ kubectl run web-server --image=nginx --replicas=12
Now pods create on both master and worker node

Taint effects:
NoSchedule
NoExecute
(PreferNoSchedule also exists)
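
Pods can still land on a tainted node if they tolerate the taint; a minimal sketch of a toleration matching the key=value:NoSchedule taint used in these labs (the pod name is just an example):

apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  containers:
    - name: nginx
      image: nginx
  tolerations:
    - key: "key"
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"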

Lab 3

Put taint on master and worker node both,
$ kubectl taint nodes master key=value1:NoSchedule
$ kubectl taint nodes node01 key=value2:NoSchedule

Now you will see all the new pods in Pending.
You can check the reason:
$ kubectl describe pods web-server

Now remove taint
$ kubectl taint nodes master key:NoSchedule-
$ kubectl taint nodes node01 key:NoSchedule-

Now those pending pods will be scheduled and created.

Kubernetes Labels


 


Scenario (diagram not included):
requirement: all my pods should go only on worker-node-1.
We put a label on worker-node-1 and select it from the pods.

Normally pods are spread across all nodes.
Scenario:
We want project-1 pods to go to worker-1 and worker-2,
project-2 pods to go to worker-3 and worker-4,
project-3 pods to go to worker-5 and worker-6.
We can bind a namespace to nodes with the help of labels.


Another scenario:
you have a requirement: your cluster has both HDD and SSD disks,
but all pods should go only to the SSD nodes.
You put a label on the SSD worker nodes and select it from the pods (see the sketch below).
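
A minimal sketch of that setup (the label key disktype, the node name, and the pod name are assumptions):

kubectl label node node01 disktype=ssd

apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  containers:
    - name: nginx
      image: nginx
  nodeSelector:
    disktype: ssd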

Lab

Go to the master node and remove the taint:
$ kubectl taint nodes --all node-role.kubernetes.io/master-

kubectl get nodes
kubectl run web-server --image=nginx --replicas=12
Some pods are created on the master, some on worker node-1.

create label
kubectl label node controlplane node=master
kubectl label node node01 node=worker

List the labels of a node:
kubectl describe node controlplane | grep -i label

create 2 yml, and apply

vim dev-server.yml

apiVersion: v1
kind: Pod
metadata:
    name: dev-server
spec:
    containers:
        - image: nginx
          name: dev-server
    nodeSelector:
         node: "worker"

vim prod-server.yml

apiVersion: v1
kind: Pod
metadata:
    name: prod-server
spec:
    containers:
        - image: nginx
          name: prod-server
    nodeSelector:
         node: "master"

Now you will see that dev-server goes to the worker node
and prod-server goes to the master node.

Lab2

create namespace
kubectl create namespace dev-team
kubectl create namespace prod-team

Create pods in the dev-team namespace:
kubectl run -n dev-team dev-server --image=nginx --replicas=12
Now list the pods:
kubectl -n dev-team get pods -o wide
You will see your pods going to both master and node01.

Bind your namespace to a node:
cd /etc/kubernetes/manifests/
ll

vi kube-apiserver.yaml
add PodNodeSelector to the existing admission plugins flag:
--enable-admission-plugins=NodeRestriction,PodNodeSelector
:wq!

kubectl get namespace

go to namespace
dev-team
prod-team

kubectl edit namespace dev-team
...
..
 name:  dev-team
 annotations:
   scheduler.alpha.kubernetes.io/node-selector: "node=worker"
:wq!

Do the same in the other namespace:

kubectl edit namespace prod-team
...
..
 name: prod-team
 annotations:
   scheduler.alpha.kubernetes.io/node-selector: "node=master"
:wq!

now create pod
kubectl run -n dev-team dev-server --image=nginx --replicas=12
kubectl run -n prod-team prod-server --image=nginx --replicas=12

kubectl -n dev-team get pods -o wide

kubectl -n prod-team get pods -o wide

Now you will see pods only on the specific node,
because you bound the namespace to the node.

Remove the labels:
kubectl label node node01 node-
kubectl label node controlplane node-

Remove the annotation from the namespaces:
kubectl edit namespace dev-team
kubectl edit namespace prod-team

Remove the admission plugin from the file:
cd /etc/kubernetes/manifests/
vi kube-apiserver.yaml
revert the flag to --enable-admission-plugins=NodeRestriction
:wq!

Kubernetes Pod Limits

By default the pod limit per node is 110.

It is set in the kubelet config:
cd /var/lib/kubelet/
vi config.yaml
.
..
maxPods: 110
:wq!

systemctl restart kubelet

kubectl get nodes

kubectl describe node

Change the limit on the master node (the lab sets it to 20): edit the same file on that node and restart kubelet.
cd /var/lib/kubelet/
vi config.yaml
.
..
maxPods: 20
:wq!
systemctl restart kubelet

Check the limit on the master node:
kubectl describe node master
...
maxPods: 20
..

Check the pods in all namespaces (to see how many are already on the master):
kubectl get pods --all-namespaces -o wide

With the limit of 20, about 9 more pods can be created on the master node.

Lab1

Create 9 pods:
kubectl run web-server --image=nginx --replicas=9
But these 9 pods are spread across different nodes.

We want all 9 pods to be created on the master node.
Solution: put a taint on node01, so all 9 pods go to the master node.

Create a taint on node01:
kubectl taint nodes node01 key=value:NoSchedule

kubectl run web-server --image=nginx --replicas=9
Now all pods are created on the master node.

kubectl get pods -o wide

Create another deployment:
kubectl run web-server1 --image=nginx --replicas=10
Now these 10 pods go into the Pending state (the master node's pod limit is reached).
You can check the reason:
kubectl describe pods web-server1

If you remove the taint from node01, or if you increase the pod limit on the master node,
these 10 pods get scheduled accordingly.

Kubernetes Resource Quota

Use case:

We have multiple projects: project-1, 2, 3,
with a different namespace for each,
but we are not sure which project uses more resources,
and sometimes we have resource-related issues.

Without resource quotas (using only namespaces):
project-1's namespace bound to (node1, node2)
project-2's namespace bound to (node3, node4)
project-3's namespace bound to (node5, node6)
But the issue comes back: inside project-1 there are again 3 departments:
1. dev
2. test
3. prod
and we are not sure whether dev uses all 200GB of resources, so we still need to define resource quotas.



Solution: we create a resource quota for each project.
It is created on the namespace.


Lab
create 3 namespace project-1,2,3
kubectl create namespace project-1
kubectl create namespace project-2
kubectl create namespace project-3


to check namespace quota
kubectl -n project-1 get quota
kubectl -n project-2 get quota

create quota on namespace
vim project-1-quota.yml

apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-1-quota
  namespace: project-1
spec:
  hard:
    pods: "2"
    configmaps: "10"
    secrets: "10"
    services: "10"
    persistentvolumeclaims: "50"

vim project-2-quota.yml

apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-2-quota
  namespace: project-2
spec:
  hard:
    pods: "200"
    secrets: "300"
    persistentvolumeclaims: "50"

create quota
kubectl -n project-1 apply -f project-1-quota.yml
kubectl -n project-2 apply -f project-2-quota.yml
 
get quota description
kubectl -n project-1 describe quota
kubectl -n project-2 describe quota

Create pods in the project-1 ns:
kubectl run -n project-1 web-server --image=nginx
kubectl run -n project-1 web-server1 --image=nginx
kubectl run -n project-1 web-server2 --image=nginx
When you create the 3rd one, the deployment and replicaset objects are created, but the pod cannot be created because the quota allows only 2 pods.

Lab2

edit
vim project-2-quota.yml

apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-2-quota
  namespace: project-2
spec:
  hard:
    pods: "10"
    configmaps: "10"
    secrets: "10"
    services: "10"
    persistentvolumeclaims: "50"
    limits.memory: "800Mi"
    limits.cpu: "10"


apply
kubectl -n project-2 apply -f project-2-quota.yml

vi pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: web-server
  namespace: project-2
spec:
  containers:
    - image: nginx
      name: web-server

If you specified cpu/memory limits in the ResourceQuota,
then you need to define resources in the pod yml as well, otherwise you will get an error when creating the pod.


Edit the pod yml again, now with the cpu and memory parameters:

vi pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: web-server
  namespace: project-2
spec:
  containers:
    - image: nginx
      name: web-server
      resources:
        limits:
          memory: "300Mi"
          cpu: "2"

$ kubectl -n project-2 describe quota
Name:                   project-2-quota
Namespace:              project-2
Resource                Used   Hard
--------                ----   ----
configmaps              0      10
limits.cpu              2      10
limits.memory           300Mi  800Mi
persistentvolumeclaims  0      50
pods                    1      10
secrets                 1      10
services                0      10


Kubernetes LimitRange

Scenario:
you already have a namespace with a resource specification,
nodes defined,
and a resource quota defined.

Use case:
we have created 50 pods;
1 pod uses 4GB of RAM, and now the other 49 pods have only 1GB of RAM left between them.

Solution: you can put a LimitRange on a
pod
container
volume (PVC)

A LimitRange covers e.g. cpu, memory, and storage.

1 core = 1000m
4 cores = 4000m

1 KB = 1000 bytes
1 MB = 1000 KB
1 GB = 1000 MB

1 KiB = 1024 bytes
1 MiB = 1024 KiB
1 GiB = 1024 MiB



 Limit Range on Container

Lab1

create namespace
kubectl create namespace limitrange-demo

set context
kubectl config set-context --current --namespace=limitrange-demo

vim limit-mem-cpu-container.yml

apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container
spec:
  limits:
  - max:
      cpu: "800m"
      memory: "1Gi"
    min:
      cpu: "100m"
      memory: "99Mi"
    default:
      cpu: "700m"
      memory: "900Mi"
    defaultRequest:
      cpu: "110m"
      memory: "111Mi"
    type: Container

kubectl get limitrange

kubectl describe limitrange

Lab2

vi limitrange.yml

apiVersion: v1
kind: Pod
metadata:
  name: busybox11
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "200Mi"
        cpu: "500m"
    
 
kubectl get pods

kubectl describe pods

Delete the pod and edit the yml again:

vi limitrange.yml

apiVersion: v1
kind: Pod
metadata:
  name: busybox11
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "200Mi"
        cpu: "900m"
    

Now you will get an error: the maximum cpu limit allowed is 800m.

vi limitrange.yml

apiVersion: v1
kind: Pod
metadata:
  name: busybox11
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    resources:
      requests:
        memory: "100Mi"
        cpu: "99m"
      limits:
        memory: "200Mi"
        cpu: "900m"

Now you will get an error: the minimum cpu request allowed is 100m.

apiVersion: v1
kind: Pod
metadata:
  name: busybox11
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    resources:
      requests:
        memory: "100Mi"
        cpu: "99m"
      limits:
        memory: "2Gi"
        cpu: "700m"

Now you will get an error: the maximum memory limit allowed is 1Gi.

apiVersion: v1
kind: Pod
metadata:
  name: busybox11
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    
kubectl describe pod

It will take the defaults from the LimitRange:
 limits (from default):
      cpu: "700m"
      memory: "900Mi"
 requests (from defaultRequest):
      cpu: "110m"
      memory: "111Mi"

Scenario

case 1 - if you define both requests and limits, they are used as defined.
case 2 - if you define only requests, the requests are kept and the limits come from the LimitRange default.
case 3 - if you define only limits, the requests are set to the same values as the limits; they do not come from the default.
case 4 - if you don't define anything, requests and limits are taken from the LimitRange defaults (defaultRequest/default).

Limit Range (Pod)

We can set min/max in a LimitRange of type Pod,
but we can't set default values the way we can for type Container.

vim limit-mem-cpu-pod.yml

apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-pod
spec:
  limits:
  - min:
      cpu: "1"
      memory: "1Gi"
    max:
      cpu: "2"
      memory: "2Gi"
    type: Pod

kubectl apply -f limit-mem-cpu-pod.yaml

kubectl get limitrange

kubectl describe limitrange

Lab2

apiVersion: v1
kind: Pod
metadata:
  name: busybox2
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "200Mi"
        cpu: "500m"
    
kubectl describe pods

Limits
        memory: 200Mi
        cpu: 500m
Requests
        memory: 100Mi
        cpu: 100m


vi

apiVersion: v1
kind: Pod
metadata:
  name: busybox2
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    resources:
      limits:
        memory: "200Mi"
        cpu: "500m"

The requests default to the same values as the limits.

kubectl describe pods
Limits
        memory: 200Mi
        cpu: 500m
Requests
        memory: 200Mi
        cpu: 500m

Lab3

vi

apiVersion: v1
kind: Pod
metadata:
  name: busybox2
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    resources:
      limits:
        memory: "1Gi"
        cpu: "100m"
  - name: busybox-cnt02
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"]
    resources:
      limits:
        memory: "1Gi"
        cpu: "100m"

kubectl apply -f limit-range-pod-2.yml

kubectl get pods

kubectl describe pods

The pod is created with 2 containers (1Gi + 1Gi = 2Gi, within the Pod limit).

But if you try to launch a pod with 3 such containers, the total is 3Gi while the LimitRange max for a Pod is 2Gi, so it is rejected:

vi

apiVersion: v1
kind: Pod
metadata:
  name: busybox2
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    resources:
      limits:
        memory: "1Gi"
        cpu: "100m"
  - name: busybox-cnt02
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"]
    resources:
      limits:
        memory: "1Gi"
        cpu: "100m"
  - name: busybox-cnt03
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"]
    resources:
      limits:
        memory: "1Gi"
        cpu: "100m"

Lab4

The 3-container pod again, this time with requests defined (for the first two containers) in addition to limits:

vi

apiVersion: v1
kind: Pod
metadata:
  name: busybox2
spec:
  containers:
  - name: busybox-cnt01
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
    resources:
      limits:
        memory: "1Gi"
        cpu: "100m"
      requests:
        memory: "100Mi"
        cpu: "100m"
  - name: busybox-cnt02
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"]
    resources:
      limits:
        memory: "1Gi"
        cpu: "100m"
      requests:
        memory: "100Mi"
        cpu: "100m"
  - name: busybox-cnt03
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"]
    resources:
      limits:
        memory: "1Gi"
        cpu: "100m"

We need to set limits for each container, because we set a max value in the LimitRange.

LimitRange (Volume)

We can set storage limits on PVCs,
i.e. restrict how much storage each PVC created in the namespace can request.

kubectl get limitrange

apiVersion: v1
kind: LimitRange
metadata:
  name: storagelimits
spec:
  limits:
  - type: PersistentVolumeClaim
    max:
      storage: 2Gi
    min:
      storage: 1Gi

Check the limitrange:
kubectl describe limitrange

Check the PVCs (if no PV exists yet, a PVC stays pending with an error that no persistent volume is available):
kubectl describe pvc

kubectl get pv

Install nfs (to back the PV):
yum install nfs* -y

After that, create the PV.

Kubernetes Autoscaling

Types:
Horizontal
Vertical

Horizontal scaling:
min = 2,
condition: pod utilization 50% and above,
max = 10.

Vertical scaling:
memory 2GB,
cpu 2 cores,
when utilization is 50% and above, increase to
memory 4GB,
cpu 3 cores.
In vertical autoscaling, when it scales,
it creates a pod with the higher resource size and then switches the service over to it.



But how is this monitored?
Kubernetes has its own monitoring component,
the metrics server.
The metrics server itself runs as a pod in the cluster.

Lab

Install Metrics Server

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.2/components.yaml

 kubectl get po -n kube-system | grep metric
metrics-server-68d9544479-tqhs9            0/1     Running            0          53s

check metric server created
kubectl top pod --all-namespaces

create pod
kubectl run hpa-demo-web --image=k8s.gcr.io/hpa-example --requests=cpu=200m --port=80 --replicas=1

check pod created
kubectl get pod | grep hpa-demo-web

Expose the deployment to create a service:
kubectl expose deployment hpa-demo-web --type=NodePort

check running service
kubectl get service | grep hpa-demo-web

Stress the pod:

Start a test pod and go inside its container:
kubectl run -it deployment-for-testing --image=busybox /bin/sh

wget -q -O- http://hpa-demo-web.default.svc.cluster.local

echo "while true; do wget -q -O- http://hpa-demo-web.default.svc.cluster.local; done" > /loops.sh

chmod +x /loops.sh

sh /loops.sh

apply horizontal Pod autoscaling

kubectl get pods | grep hpa-demo-web

create autoscale
kubectl autoscale deployment hpa-demo-web --cpu-percent=5 --min=1 --max=5

check all pod
kubectl get all

view the hpa in action
kubectl get pod | grep hpa-demo-web

Check the current horizontal scaling status:
watch kubectl get hpa
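
For reference, a YAML sketch equivalent to the kubectl autoscale command above (the target name matches this lab's deployment; apiVersion autoscaling/v1 is an assumption about the cluster version):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-demo-web
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 5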

Learn how to set up Kubernetes on:
Azure AKS- video-40,41
AWS Kops Cluster - video42
GCP - video43,44,45

----------------------------------------------------------------------------------------------------------------

Namespaces

Kubernetes uses namespaces to organize objects in the cluster. You can think of each namespace as a folder that holds a set of objects. By default, the kubectl command-line tool interacts with the default namespace. If you want to use a different namespace, you can pass kubectl the --namespace flag. For example, kubectl --namespace=mystuff references objects in the mystuff namespace.

Contexts

If you want to change the default namespace more permanently, you can use a context. This gets recorded in a kubectl configuration file, usually located at $HOME/.kube/config. This configuration file also stores how to both find and authenticate to your cluster. For example, you can create a context with a different default namespace for your kubectl commands using:

$kubectl config set-context my-context --namespace=project-1

This creates a new context, but it doesn’t actually start using it yet. To use this newly created context, you can run:

$ kubectl config use-context my-context

How to check the current context and namespace?

kubectl describe sa default | grep Namespace

kubectl config get-contexts
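
There is also a direct command for just the current context name:
kubectl config current-context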

 

How to check verbose API request logs when running kubectl run/apply?

kubectl --v=8 apply -f vicidial-service.yml 

 

How to Add non-root users to a managed Kubernetes cluster?

  Generating private key and certificate

https://www.youtube.com/watch?v=8E_sS-bbs5g

https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/


I ended up fixing this by manually editing my kube config and replacing the value in client-certificate-data with the string in status.certificate. I'm thinking the crt file that I created with the contents of the certificate needed to be in PEM format and wasn't.

----------------------------------------------------------------------------------------------------------------

 
Question:
What are 3 issues you faced in your Kubernetes env recently?
Kubernetes errors?
e.g. CrashLoopBackOff
 

Explain the Pod lifecycle.

Pod Lifecycle

1.Pending-------|
2.Creating------|---Failed, Unknown, CrashLoopBackOff
3.Running-------|
4.Succeeded

Failed - a pod can fail from the Pending, Creating, or Running state; the reason is that it crashed for some software or hardware problem.

Unknown - if we lose communication with the worker node where the pod is running, we can't know the state of the pod during that period.

CrashLoopBackOff - the pod crashed and restarted many times; during this period the pod status shows CrashLoopBackOff.

Pending - the scheduler is looking for an appropriate node to create this pod on; during this time the pod is in the Pending state.
Creating - the scheduler has selected a worker node to host the pod; the worker node pulls the container image and starts the container.
Running - the pod is up and all the containers inside it are up and running.
Succeeded - after some period of time the main purpose of the pod is achieved and all the containers inside the pod terminate successfully.
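
A quick way to see just the phase of a pod (a sketch; replace pod_name with a real pod):
kubectl get pod pod_name -o jsonpath='{.status.phase}'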

 

Difference between StatefulSet and Deployment?

Data persistence:              Deployment - stateless.  StatefulSet - stateful.
Pod name and identity:         Deployment - pods get an ID made of the deployment name plus a random hash (a temporary, unique identity).  StatefulSet - each pod gets a persistent identity made of the StatefulSet name plus a sequence number.
Interchangeability:            Deployment - pods are identical and can be interchanged.  StatefulSet - pods are neither identical nor interchangeable.
Behavior:                      Deployment - a pod can be replaced by a new replica at any time.  StatefulSet - pods retain their identity when rescheduled on another node.
Volume claim:                  Deployment - all replicas share a PVC and a volume.  StatefulSet - each pod gets a unique volume and PVC.
Allowed volume access mode(s): Deployment - ReadWriteMany and ReadOnlyMany.  StatefulSet - ReadWriteOnce.
Pod interaction:               Deployment - requires a service to interact with the pods.  StatefulSet - the headless service handles pod network identities.
Order of pod creation:         Deployment - pods are created and deleted randomly.  StatefulSet - pods are created in a strict sequence and cannot be deleted randomly.

 For more

https://www.containiq.com/post/statefulset-vs-deployment

 

What is a headless service?

A headless service is a regular Kubernetes service where spec.clusterIP is explicitly set to "None" (and spec.type stays "ClusterIP"), so no cluster IP is allocated and DNS returns the individual pod IPs instead.


apiVersion: v1
kind: Service
metadata:
  name: darwin
  labels:
    app: darwin # should match the pod labels (.spec.template.metadata.labels) in the StatefulSet
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: darwin
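For context, here is a minimal StatefulSet sketch that could sit behind the darwin headless service above (the nginx image and the 1Gi volume size are just placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: darwin
spec:
  serviceName: darwin          # the headless service defined above
  replicas: 2
  selector:
    matchLabels:
      app: darwin
  template:
    metadata:
      labels:
        app: darwin
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:        # each pod (darwin-0, darwin-1, ...) gets its own PVC
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi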

 

Blue-green deployment strategy? Or what strategy do you use in your live environment?

Blue-green deployments with Kubernetes

Blue-green deployments in action: imagine we have version v1 of an application called myapp, and it is currently running in blue. In Kubernetes, we run applications with deployments and pods.

Some time later, the next version (v2) is ready to go. So we create a brand-new production environment called green. In Kubernetes we only have to declare a new deployment, and the platform takes care of the rest. Users are not yet aware of the change because the blue environment keeps working unaffected; they won't see any change until we switch traffic over from blue to green.
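One common way to do the switch in Kubernetes is to keep a single Service and flip its selector from the blue labels to the green ones. A minimal sketch, assuming the deployments label their pods app: myapp plus version: v1 (blue) / version: v2 (green):

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: myapp
    version: v1      # currently pointing at blue

# Once green (v2) is deployed and healthy, cut traffic over:
# kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","version":"v2"}}}'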

Kubernetes networking: how does communication happen between pods?


Kubernetes control plane (master node) and worker node components: how does the whole ecosystem work?

Kubernetes RBAC - Role-Based Access Control?
Access to resources is granted on the basis of the roles of individual users within your organization.

RBAC authorizes the user according to the access policies you define.

RoleBinding /
ClusterRoleBinding_______Service account
                                |_______Role / ClusterRole

 

Once you create a role, you need to bind it to a service account (or a user/group).

Example 

The RBAC API declares four kinds of Kubernetes object:

1. Role
2. ClusterRole
3. RoleBinding
4. ClusterRoleBinding

Example:
Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: foo
  namespace: test

kubectl apply -f serviceaccount.yaml

kubectl auth can-i --as system:serviceaccount:test:foo get pods -n test
no

vim role.yml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: test
  name: testadmin
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

apiGroups: ["*"] means all API groups (the core API group by itself is written as "")
resources: ["*"] means every resource type, e.g. pods, deployments, etc.
verbs: ["*"] means every action: not only get, but also list, watch, create, update, delete, etc.
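By contrast, a typical least-privilege role pins all three down. A minimal read-only sketch (the name pod-reader is a placeholder):

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: test
  name: pod-reader
rules:
- apiGroups: [""]             # "" = the core API group, where pods live
  resources: ["pods"]
  verbs: ["get", "list", "watch"]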

Basic role binding

rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testadminbinding
  namespace: test
subjects:
- kind: ServiceAccount
  name: foo
  namespace: test
roleRef:
  kind: Role
  name: testadmin
  apiGroup: rbac.authorization.k8s.io


test again  

kubectl auth can-i --as system:serviceaccount:test:foo get pods -n test
yes
kubectl auth can-i --as system:serviceaccount:test:foo create pods -n test
yes
kubectl auth can-i --as system:serviceaccount:test:foo create deploy -n test
yes
kubectl auth can-i --as system:serviceaccount:test:foo create deploy -n kube-system
no

ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: testadminclusterbinding
subjects:
- kind: ServiceAccount
  name: foo
  apiGroup: ""
  namespace: test
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

test again
kubectl auth can-i --as system:serviceaccount:test:foo create deploy -n kube-system
yes
kubectl auth can-i --as system:serviceaccount:test:foo create deploy -n default
yes

With cluster-admin bound through a ClusterRoleBinding, that service account can do everything, in every namespace of the cluster.

Ingress Setup

LoadBalancer
limitations

1. One load balancer can only point at a single Kubernetes service object,
   so if you have 100 microservices, you need 100 load balancers.

2. If you have a web service running on different URL paths:
test.com/a
        /b
        /c
   when a request needs to reach a different service, before Ingress you had to resolve the paths internally yourself.



An Ingress controller is responsible for fulfilling the Ingress.
What is an ingress controller?
In order for the Ingress resource to work, the cluster must have an ingress controller running.
Lots of ingress controllers are available, e.g.:
the NGINX ingress controller

1st condition

Create an ingress controller in GKE
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/cloud/deploy.yaml
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx

1. Run the sample app YAML below.
2. Then run the Ingress Resource YAML (also below).
3. Check the ingress:
  kubectl get ingress
  kubectl describe ingress basic-ingress
  you will see the IP added
4. Go to that IP; it should show you the app.

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: web
  type: NodePort

Ingress Resource yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080

2nd condition
Sample Fanout

A fanout configuration routes traffic from a single IP address to more than one service, based on the HTTP host or URI being requested.

The YAML below deploys 2 deployments and 2 services, web and web2.

1. Run the two sample app YAMLs below (web and web2).
2. Then run the Ingress Resource YAML, which contains the name-based hosting rules.
   For name-based hosting we provide the hostname configuration in the YAML file.
3. Check the ingress:
  kubectl get ingress
  kubectl describe ingress host-ingress
  you will see the IP added
4. To access these URLs, edit the /etc/hosts file (e.g. on a Mac):
   Add a line
   <IP of ingress>   test.com   (and similarly for abc.com)
   Now try to access it in the browser.

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web2
  namespace: default
spec:
  selector:
    matchLabels:
      run: web2
  template:
    metadata:
      labels:
        run: web2
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:2.0
        imagePullPolicy: IfNotPresent
        name: web2
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: web
  type: NodePort

---
apiVersion: v1
kind: Service
metadata:
  name: web2
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: web2
  type: NodePort


apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: host-ingress
spec:
  rules:
  - host: "test.com"
    http:
      paths:
      - path: /test
        backend:
          serviceName: web
          servicePort: 8080
  - host: "abc.com"
    http:
      paths:
      - path: /abc
        backend:
          serviceName: web2
          servicePort: 8080

https://www.youtube.com/watch?v=LYBGBuaOH8E 

https://devops4solutions.com/setup-kubernetes-ingress-on-gke/



If a workload is taking too much CPU or memory in Kubernetes, how do you control it,
and what happens when the limit is reached?
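One way to answer this: set resource requests and limits on every container (and, per namespace, ResourceQuota / LimitRange objects). When a container exceeds its CPU limit it is throttled; when it exceeds its memory limit it is OOMKilled and restarted. A minimal sketch reusing the nginx pod from earlier; the numbers are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    resources:
      requests:           # what the scheduler reserves on the node
        cpu: "250m"
        memory: "128Mi"
      limits:             # hard ceiling: CPU above this is throttled,
        cpu: "500m"       # memory above this gets the container OOMKilled
        memory: "256Mi"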

Kubernetes networking and security ?



Kubernetes architecture/components:

  1. API Server: Central management point for the Kubernetes cluster, handling all administrative tasks and communication.

  2. Controller Manager: Maintains the desired state of the cluster by monitoring and reconciling actual state with the desired state defined in the Kubernetes API server.

  3. Scheduler: Assigns pods to nodes based on resource availability and other constraints, ensuring optimal resource utilization.

  4. etcd: Distributed key-value store used as the cluster's database to store all Kubernetes cluster data, including configuration, state, and metadata.

  5. Kubelet: Primary node agent responsible for managing the state of the node and maintaining application containers based on pod specifications.

  6. Kube-proxy: Manages network communication to and from the pod network, handling routing and load balancing for services.

  7. Container Runtime: Software responsible for running containers, supporting multiple runtimes like Docker, containerd, and CRI-O.

  8. Kubernetes API: Interface through which all administrative tasks and communication with the cluster are performed, exposing a RESTful interface for programmatically interacting with the cluster.


    kube-apiserver - all communication happens through it; it talks to every other master component and to the worker nodes as well.

    scheduler - schedules pods onto nodes based on the resources available on each node and the resources requested by the pod, and reports back to etcd via the API server.

    kubelet - the agent that connects the master and worker nodes; it follows instructions from the control plane on the master node, e.g. for pod scheduling.

    etcd - stores sensitive and cluster-critical info: pod info, secrets/passwords, cluster config, etc.

    controller - maintains the desired state, e.g. pods = 4; if a pod dies, it reschedules a replacement.

    Kube-proxy - maintains the network rules on each node: which IP a pod is reachable on, how it interacts with a service, and how an incoming request is routed to the correct pod. (Cluster DNS itself is handled by CoreDNS, not kube-proxy.)


    LoadBalancer:

  1. Exposing web applications to the internet: When you have a web application running inside Kubernetes and you want to expose it to the internet, you can use a LoadBalancer service. For example, if you have a frontend web application or an API service that needs to be accessible from outside the cluster, a LoadBalancer service can provide a public IP address and distribute incoming traffic across multiple backend pods.

  2. High availability and scalability: LoadBalancer services are often used in scenarios where high availability and scalability are essential. The cloud provider's load balancer automatically handles traffic distribution and scaling based on demand, ensuring that your application remains available and responsive.

Ingress:

  1. Multiple services under a single IP address: Ingress is useful when you have multiple backend services running inside the cluster and you want to expose them under a single IP address. Instead of provisioning a separate LoadBalancer for each service, you can use Ingress to route traffic based on URL paths or hostnames to different backend services.

  2. Path-based routing: Ingress allows you to route traffic to different services based on URL paths. This is useful when you have multiple microservices or API endpoints running in the cluster and you want to expose them through a single entry point. For example, you can route /api requests to one service and /app requests to another service.

  3. TLS termination and SSL offloading: Ingress supports TLS termination, allowing you to terminate SSL/TLS connections at the edge of the cluster. This simplifies certificate management and offloads the encryption/decryption workload from backend services, improving performance.

In summary, use LoadBalancer services when you need to expose a single service to the internet with basic TCP-level load balancing, and use Ingress when you need more advanced routing capabilities, such as path-based routing, TLS termination, and exposing multiple services under a single IP address.

  1. Network Policies:
    • Network Policies are Kubernetes resources used to define rules for how groups of pods are allowed to communicate with each other and other network endpoints.
    • They provide fine-grained control over network traffic within the cluster, allowing administrators to define ingress and egress rules based on labels, namespaces, and IP addresses.
    • Network Policies help enhance security by enforcing segmentation and isolation between different components of an application.
    • Not all Kubernetes distributions support Network Policies out of the box, and they require a network plugin that supports the Kubernetes Network Policy API, such as Calico or Cilium.
https://www.youtube.com/watch?v=0FijKLlOcEE

Network policies - control which traffic a pod is allowed to receive (ingress) or send (egress).

Match Labels Selector: This type of selector matches pods based on their labels.

Suppose you have a microservices architecture where different components of your application are deployed as separate pods. You want to enforce a network policy that allows communication only between pods belonging to the same service. You can achieve this by labeling pods with their respective service names and then using a Network Policy with a match labels selector to allow traffic only between pods with matching labels.


apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-service
spec:
  podSelector:
    matchLabels:
      app: frontend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
 

Namespace Selector: This selector matches pods based on the namespace to which they belong. 

Imagine you have multiple teams working on different projects within the same Kubernetes cluster, and you want to enforce network policies specific to each team's namespace. You can create Network Policies with namespace selectors to define rules that apply only to pods within a particular namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-backend-namespace
spec:
  podSelector:
    matchLabels:
      app: frontend
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: backend
 

Pod Selector: This selector matches pods within the policy's namespace based on their labels.

Consider a scenario where you have pods running in different environments (e.g., production, staging, development) within the same namespace, and you want to restrict traffic between pods based on their environment. You can use a Network Policy with a pod selector to define rules that apply only to pods with specific labels representing their environment.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-staging
spec:
  podSelector:
    matchLabels:
      environment: staging
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
 

    Kubernetes Storage:
        Persistent Volumes
        Persistent Volume Claims

 https://www.youtube.com/watch?v=6GkEFqdjdRM

 PV - PersistentVolume - created by the admin; it represents a piece of storage (e.g. a disk on a cloud provider) that is made available to the cluster.


 

PVC - PersistentVolumeClaim - a user's request for storage, which gets bound to a PV created earlier.

 

 ReadWriteOnce (RWO), 

ReadOnlyMany (ROX), 

ReadWriteMany (RWX)

 On what basis do a PV and a PVC bind? Kubernetes checks 2 parameters - size and accessModes - and when both criteria match, it binds the PV to the PVC; the data then lives on that PV.

 The above process is static provisioning.
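A minimal static-provisioning sketch; the hostPath backend and the 1Gi size are only placeholders to show how size and access mode have to line up on both sides:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:                    # placeholder backend; in the cloud this would be an EBS/PD disk
    path: /data/pv-demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteOnce              # must match the PV's access mode
  resources:
    requests:
      storage: 1Gi             # must fit within the PV's capacity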

Dynamic Provisioning

The administrator creates a StorageClass.

The user then creates a PVC for their requirement and references that storage class in it. Once the PVC requests storage, the provisioner creates the disk on the cloud, a PV is created automatically, Kubernetes binds that PV to the PVC, and data starts being stored on that storage.
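A minimal dynamic-provisioning sketch, assuming a GKE-style cluster where the GCE Persistent Disk CSI provisioner is available (the class name fast is a placeholder; other clouds use a different provisioner):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: pd.csi.storage.gke.io   # cloud-specific provisioner
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: fast             # referencing the class triggers dynamic provisioning
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi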


  Bound and Pending States: PVCs can be in either a bound state (indicating that they are successfully bound to a PV) or a pending state (indicating that Kubernetes is still looking for a suitable PV to bind).

 RBAC

Whenever a request is sent to Kubernetes, it checks authentication first and then permissions.

A request can come from two kinds of identity: a normal user or a service account.

 

Authentication methods in Kubernetes
Static password file
Static token file - a token is passed with the request
Certificates - a client certificate is passed; kube-apiserver checks the validity of the certificate and authenticates the user

If the first step (authentication) succeeds, the second step is authorization of the request.

Authorization modes
Attribute-Based Access Control (ABAC) - policies are assigned as policy objects to individual users
RBAC - works at the role/group level: you create a role, assign it to people, and bind it with a role binding



Role - defines a set of permissions. It is a namespaced object, meaning it defines what actions the role allows within that namespace.

Namespaced objects - Pods, Services, ConfigMaps, PVCs, Secrets, Deployments.


ClusterRole
It is for cluster-wide objects - Nodes, PVs, Namespaces, ClusterRoles, ClusterRoleBindings.
If you want a user to be able to get, watch, and list pods across the whole cluster, you can create a ClusterRole for that.

RoleBinding - used to bind the role we created (i.e. its permissions) to a single user or a group of users.

In a rolebinding's subjects we can mention a user, a group, or a serviceaccount.

How to check whether the permission assigned through the rolebinding is working:
kubectl auth can-i get pod --as jack
yes
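A sketch of what would make that check return yes, assuming jack is a normal (certificate-based) user: bind the built-in read-only view ClusterRole to him. A ClusterRoleBinding grants it cluster-wide; use a RoleBinding instead to limit it to one namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-jack
subjects:
- kind: User
  name: jack
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                       # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io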

Pod security policy

What is a pod security policy?
A pod might contain something harmful, or an attacker might manage to run a harmful container or code in it; that is possible.
So we should have a pod security policy that prevents creating that kind of pod, for example by disallowing privileged containers.


Security context

What is a security context in a pod and container?
It applies the least-privilege principle: give only the necessary permissions inside the pod and do not run as root.
Mention a securityContext on each pod and container.
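A minimal sketch of such a securityContext; the UID/GID values and the image are placeholders (the image must be one that can run as a non-root user):

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:                 # pod-level defaults
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    image: myapp:1.0               # placeholder image
    securityContext:               # container-level settings
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]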


Operators

In Kubernetes, an operator is a method of packaging, deploying, and managing applications using custom controllers. Operators extend Kubernetes' declarative API to manage complex, stateful applications, automating tasks such as deployment, scaling, and lifecycle management.
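Concretely, an operator usually ships a CustomResourceDefinition plus a controller that watches it. A minimal CRD/custom-resource sketch; the Backup kind, the example.com group, and the schedule field are made-up placeholders, and the controller code itself is not shown:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string
---
apiVersion: example.com/v1
kind: Backup
metadata:
  name: nightly
spec:
  schedule: "0 2 * * *"            # the operator's controller reacts to this object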


What is Istio?
It is a service mesh: a tool in Kubernetes that helps with service-to-service communication.

Istio Architecture

Install Istio after downloading it.
Run some applications, e.g. 3-4 applications as different services.
Configure Envoy proxy injection:
label the namespace with istio-injection=enabled.
You don't need to inject the proxy manually; you just label the namespace and recreate the pods/deployments.

The Envoy proxy container is a sidecar container.
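For example, the labeling step would look something like this (the default namespace is just an assumption):

kubectl label namespace default istio-injection=enabled
kubectl get namespace -L istio-injection           # verify the label
kubectl rollout restart deployment -n default      # recreate pods so the sidecar gets injected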


Install Istio Integrations for monitoring & data visualization
Integrations in Istio
Cert-manager
Jaeger
Prometheus
Grafana
Kiali
Zipkin

These are add-ons written in YAML, so we just need to apply their YAML files.

What is Jaeger/tracing? Open-source, end-to-end distributed tracing -
it is basically a service for tracing microservices.

Zipkin is an alternative to jaeger

Kiali - a single dashboard where we can see the application graph, services, and Istio config;
the best part is that it shows you graphically which service communicates with which service.






