Tuesday, May 30, 2023

Project: Sample app Docker build with Jenkins, store in ECR, and deploy on an EKS cluster

 

How to set up the AWS CLI and access for an AWS account.

1. Create an AWS IAM user.

Add the user to an access group; if you want to give full permissions, attach a policy such as AdministratorAccess, as below.



2. Create an access key for that user.



3. Install the AWS CLI on your local computer; go to the doc below and install according to your OS.

https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html


 

 

4. Configure the AWS CLI with the access key:

aws configure
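For reference, the interactive prompts look roughly like this (the values below are placeholders; use the access key created in step 2):

AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-2
Default output format [None]: json

You can verify the connection with:

aws sts get-caller-identity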

 

Done, now you are connected to your AWS account via the AWS CLI.

 

Now install Terraform:

https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli 

After installing Terraform, verify it:

terraform --help

Now create a folder, go inside it, and start the Terraform work.

Suppose you have created a Terraform script as below and stored it inside that folder.

Script name, for example: aws_ec2_jenkins_docker_install.tf

provider "aws" {
region = "us-east-2" # Update with your desired region
}

resource "aws_key_pair" "jenkins_keypair" {
key_name = "jenkins-keypair"
public_key = file("/root/.ssh/id_rsa.pub") # Replace with the path to your public key
}

resource "aws_instance" "jenkins_instance" {
ami = "ami-03a0c45ebc70f98ea" # Replace with the desired AMI ID
instance_type = "t2.small" # Replace with the desired instance type
key_name = aws_key_pair.jenkins_keypair.key_name
vpc_security_group_ids = [aws_security_group.jenkins_sg.id]

user_data = <<-EOF
#!/bin/bash
sudo apt-get update
sudo apt-get install -y docker.io
sudo docker pull jenkins/jenkins:lts
    sudo cat <<EOF >Dockerfile
 
FROM jenkins/jenkins:lts

USER root

# Install Docker CLI dependencies
RUN apt-get update \
&& apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common

# Add Docker's official GPG key
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

# Add the Docker repository
RUN echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list

# Update package lists and install Docker CLI
RUN apt-get update \
&& apt-get install -y docker-ce-cli

# Switch back to the Jenkins user
USER jenkins
 
EOF
sudo docker build -t jenkins .
sudo docker run -d -p 8080:8080 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts

sleep 30 # Wait for Jenkins to start

jenkins_password=$(sudo docker exec $(sudo docker ps -q --filter "ancestor=jenkins/jenkins:lts") cat /var/jenkins_home/secrets/initialAdminPassword)
echo "Jenkins initial admin password: $jenkins_password"
EOF
}

resource "aws_eip" "jenkins_eip" {
instance = aws_instance.jenkins_instance.id
}

resource "aws_security_group" "jenkins_sg" {
name = "jenkins-sg"
description = "Security group for Jenkins"
vpc_id = "vpc-0d67054cd23a8f716" # Replace with the desired VPC ID

ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_eip_association" "jenkins_eip_association" {
instance_id = aws_instance.jenkins_instance.id
allocation_id = aws_eip.jenkins_eip.id
}

output "jenkins_login_password" {
value = aws_instance.jenkins_instance.user_data
}

output "jenkins_url" {
value = "http://${aws_eip.jenkins_eip.public_ip}:8080" 
}
 
 
 
 

Now, from inside that folder, run the commands below:

terraform init - downloads the provider plugins your configuration needs.

terraform plan - run this before executing the script; it shows the infrastructure that will be created.

terraform apply - creates all the infrastructure you defined in the script.

Now you can go to the AWS console and check.

terraform destroy - deletes all the infrastructure created by terraform apply.
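Since the jenkins_login_password output only echoes the user_data script, you can fetch the real initial admin password over SSH once the instance is up. A sketch, assuming an Ubuntu AMI (default user ubuntu) and the jenkins image name from the script above:

ssh ubuntu@<elastic-ip> 'sudo docker exec $(sudo docker ps -q --filter "ancestor=jenkins") cat /var/jenkins_home/secrets/initialAdminPassword'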


After Jenkins has started, install the plugins below in Jenkins:

Docker Pipeline

Amazon ECR plugin


Configure the GitLab plugin in Jenkins:

Dashboard -> Manage Jenkins -> Configure System


Create a pipeline in Jenkins.

GitLab connection

Pipeline - Pipeline script from SCM

Repo - https://gitlab.com/manjeetyadav19/sample_microservice_cicd_withjenkins_deployoneks.git

Branch specifier - */main

Repo code explained:

We are building a simple Node.js app that prints "Hello, World!" at the web URL.

index.js

const http = require('http');

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello, World!\n');
});

const port = process.env.PORT || 3000;
server.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
 
Build it and try it locally first if you want:
npm init - creates a package.json (package-lock.json is generated when you run npm install)
node index.js (or npm start if you define a start script)
Then go and check http://localhost:3000 in the browser.
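For reference, a minimal package.json for this app might look like the sketch below (the field values are illustrative; npm init will prompt you for your own):

{
  "name": "sampleapp",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  }
}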
After the code works, write a Dockerfile to package the app as a microservice.
Dockerfile
# Use the official Node.js image as the base
FROM node:14

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the web app code to the working directory
COPY . .

# Expose the port that the app will run on
EXPOSE 3000

# Start the app
CMD [ "node", "index.js" ]



And if you want, you can also test it as a Docker container locally:
docker build -t sampleapp .
docker run -d -p 3000:3000 sampleapp
docker ps - you will see the running sampleapp container
Then check the browser; you should see the same "Hello, World!" message.
Now create a Jenkinsfile for CI that builds the image and pushes it to the ECR repository.
Jenkinsfile
pipeline {
    agent any

    environment {
        ECR_REGISTRY = "896757523510.dkr.ecr.us-east-2.amazonaws.com" // Replace with your ECR registry URL
        ECR_REPO     = "sampleapp"                                    // Replace with your ECR repository name
        IMAGE_TAG    = "latest"                                       // Replace with your desired image tag
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        stage('Build Docker Image') {
            steps {
                script {
                    docker.build("${ECR_REGISTRY}/${ECR_REPO}:${IMAGE_TAG}", "-f Dockerfile .")
                }
            }
        }

        stage('Push Docker Image to ECR') {
            steps {
                script {
                    // 'ecr:us-east-2:aws-access-key' tells the Amazon ECR plugin to
                    // authenticate with the 'aws-access-key' credential from Jenkins.
                    docker.withRegistry("https://${ECR_REGISTRY}", 'ecr:us-east-2:aws-access-key') {
                        sh "docker push ${ECR_REGISTRY}/${ECR_REPO}:${IMAGE_TAG}"
                    }
                }
            }
        }
    }
}
  
For the above Jenkinsfile, please define the values below. Also create the ECR repository (a CLI example follows below).
ECR_REGISTRY = "896757523510.dkr.ecr.us-east-2.amazonaws.com" // Replace with your ECR registry URL
ECR_REPO = "sampleapp" // Replace with your ECR repository name
IMAGE_TAG = "latest" // Replace with your desired image tag
In 'ecr:us-east-2:aws-access-key', aws-access-key is an AWS credential; please create it in the Jenkins credentials store (Manage Jenkins -> Credentials).
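If you prefer the CLI, the repository can be created with the AWS CLI (repository name and region taken from the values above):

aws ecr create-repository --repository-name sampleapp --region us-east-2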
  
Push all the code to the GitLab repo's main branch.
Now run the Jenkins build and check the Jenkins logs for any errors.
 
 
 
 
 
 
 
 
 

Saturday, May 27, 2023

Terraform with AWS

How to set up the AWS CLI and access for an AWS account.

1. Create an AWS IAM user.

Add the user to an access group; if you want to give full permissions, attach a policy such as AdministratorAccess, as below.



2. Create an access key for that user.



3. Install the AWS CLI on your local computer; go to the doc below and install according to your OS.

https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html


 

 

4. Configure the AWS CLI with the access key:

aws configure

 

Done, now you are connected to your AWS account via the AWS CLI.

 

Now install Terraform:

https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli 

After installing Terraform, verify it:

terraform --help

Now create a folder, go inside it, and start the Terraform work.

Suppose you have created a Terraform script as below and stored it inside that folder.

Script name, for example: aws_ec2_jenkins_docker_install.tf

provider "aws" {
region = "us-east-2" # Update with your desired region
}

resource "aws_key_pair" "jenkins_keypair" {
key_name = "jenkins-keypair"
public_key = file("/root/.ssh/id_rsa.pub") # Replace with the path to your public key
}

resource "aws_instance" "jenkins_instance" {
ami = "ami-03a0c45ebc70f98ea" # Replace with the desired AMI ID
instance_type = "t2.small" # Replace with the desired instance type
key_name = aws_key_pair.jenkins_keypair.key_name
vpc_security_group_ids = [aws_security_group.jenkins_sg.id]

user_data = <<-EOF
#!/bin/bash
sudo apt-get update
sudo apt-get install -y docker.io

sudo docker run -d -p 8080:8080 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts

sleep 30 # Wait for Jenkins to start

jenkins_password=$(sudo docker exec $(sudo docker ps -q --filter "ancestor=jenkins/jenkins:lts") cat /var/jenkins_home/secrets/initialAdminPassword)
echo "Jenkins initial admin password: $jenkins_password"
EOF
}

resource "aws_eip" "jenkins_eip" {
instance = aws_instance.jenkins_instance.id
}

resource "aws_security_group" "jenkins_sg" {
name = "jenkins-sg"
description = "Security group for Jenkins"
vpc_id = "vpc-0d67054cd23a8f716" # Replace with the desired VPC ID

ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_eip_association" "jenkins_eip_association" {
instance_id = aws_instance.jenkins_instance.id
allocation_id = aws_eip.jenkins_eip.id
}

output "jenkins_login_password" {
value = aws_instance.jenkins_instance.user_data
}

output "jenkins_url" {
value = "http://${aws_eip.jenkins_eip.public_ip}:8080"
}

Now, from inside that folder, run the commands below:

terraform init - downloads the provider plugins your configuration needs.

terraform plan - run this before executing the script; it shows the infrastructure that will be created.

terraform apply - creates all the infrastructure you defined in the script.

Now you can go to the AWS console and check.

terraform destroy - deletes all the infrastructure created by terraform apply.



Thursday, May 18, 2023

DevOps Projects

 Project 1

Multi Tier Web application setup - locally

Architecture of project services:
NGINX
TOMCAT
RABBITMQ
MEMCACHED
MYSQL

Source Code:

https://github.com/devopshydclub/vprofile-project/tree/local-setup


 

The user will open the IP address (the IP of the load balancer); the Nginx service provides the load balancing. If your application is in Java, you will use Tomcat to host it; if your application needs external storage, you will use NFS (shared storage).
The user will log in to the web app on Tomcat, and that info will be stored in the MySQL database.
RabbitMQ is the message broker / queuing agent that connects two applications together; you can stream data through it (message broker, message queue).

For each login, the application will run a query to access the user's login info, and that request goes through the Memcached service. Memcached is a database cache and is connected to the MySQL server.

If a request comes for the first time, it goes to the MySQL service and that info is stored in Memcached; the next time the request comes, it is checked in Memcached first. It is just like a browser cache, but for the database.




 Project 2 Lift and shift on AWS cloud



Project 3

Refactoring with AWS

Re-architect services for the AWS cloud

Architecture to boost agility or improve business continuity.

Scenario
Project services running on physical/virtual/cloud machines.

A variety of services that power your project runtime.


Cloud setup

PaaS & SaaS
IaC (Infrastructure as Code)
Flexibility
Automation

Ease of infra management

Front end services


Backend Services

Comparison

Services we use in this project on AWS


Flow Execution








Project 4


GitHub Actions CI and ArgoCD CD

Find sample code for deployment.

  • In our example we are using Django code for the GitHub Actions CI work.
  • Download the code from the site below: https://code-projects.org/invoice-generator-in-django-with-source-code/
  • Try to build the code on your local machine and test that it runs locally.
  • Once you see the code running fine, create a Docker image:
  • write a Dockerfile, build the code, and try to run the Docker image.
  • Once you are satisfied that the code's functionality is working fine,
  • make a git repo. (https://github.com/manjeetyadav19/ci_cd_with_github_argocd1)
  • Push the code and Dockerfile to GitHub, and plan for CI with GitHub Actions.
  • For GitHub CI you need to make a directory structure in your source code:
  • .github/workflows
  • In the workflows directory, create a GitHub Actions YAML file (a sketch follows after this section).
    (https://github.com/manjeetyadav19/ci_cd_with_github_argocd1/blob/main/.github/workflows/docker-image.yml)
  • This YAML file can be created with the help of the templates already provided by GitHub Actions.
  • Once your GitHub Actions YAML file succeeds, you can create a Kubernetes cluster or minikube for the microservice deployment.
  • In the Kubernetes cluster, first install ArgoCD.
  • https://mycloudjourney.medium.com/argocd-series-how-to-install-argocd-on-a-single-node-minikube-cluster-1d3a46aaad20
  • After it is installed, go to the ArgoCD UI.
  • You can link your GitHub repo with ArgoCD and create an application where you define the namespace and Kubernetes environment for new deployments/services.

    Once ArgoCD is installed in the Kubernetes cluster, you can write Kubernetes Deployment and Service files and upload them to your GitHub repo.

    So once you update the code in the git repo, ArgoCD will pull the changes from the GitHub repo and deploy the Deployment and Service YAML files on Kubernetes; you can check the sync status in ArgoCD.
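A minimal GitHub Actions workflow for this CI step might look like the sketch below. The image name and the DOCKERHUB_USERNAME/DOCKERHUB_TOKEN secrets are assumptions for illustration; the repo's actual docker-image.yml may differ.

name: Docker Image CI

on:
  push:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Log in to Docker Hub using repository secrets (assumed names)
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
      # Build the image, tagged with the commit SHA
      - name: Build the Docker image
        run: docker build . --file Dockerfile --tag ${{ secrets.DOCKERHUB_USERNAME }}/invoice-generator:${{ github.sha }}
      - name: Push the Docker image
        run: docker push ${{ secrets.DOCKERHUB_USERNAME }}/invoice-generator:${{ github.sha }}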

Important commands and issues that come up during deployment in K8s

Log in to Docker Hub
On your laptop, you must authenticate with a registry in order to pull a private image.
docker login

The login process creates or updates a config.json
View the config.json file:
cat ~/.docker/config.json
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "c3R...zE2"
        }
    }
}

Create a Secret based on existing credentials
If you already ran docker login, you can copy that credential into Kubernetes ($HOME is used because ~ is not expanded in this position):
kubectl create secret generic regcred \
    --from-file=.dockerconfigjson=$HOME/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson


Note: if a pod deployment fails and you see the error ImagePullBackOff/ErrImagePull, check the Docker config file (cat ~/.docker/config.json).

If the config shows a v1 registry URL while the pod logs show it trying a v2 URL, that mismatch is the problem.

So it is important to check the error in the pod logs; if it is related to the v1 vs v2 registry URL, change it accordingly and try again.
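For reference, a Deployment and Service that pull the private image using the regcred secret might look like the sketch below (the image name, labels, and ports are illustrative assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp
  template:
    metadata:
      labels:
        app: sampleapp
    spec:
      imagePullSecrets:
        - name: regcred   # the secret created above
      containers:
        - name: sampleapp
          image: <dockerhub-user>/invoice-generator:latest  # illustrative image name
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: sampleapp
spec:
  selector:
    app: sampleapp
  ports:
    - port: 80
      targetPort: 8000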


New setup on EKS

  • install aws cli
  • install kubectl
  • install helm
  • created an AWS user and generated an access key under it; you need to assign permissions to the AWS user, but I just gave AdministratorAccess


  • created EKS cluster

eksctl create cluster --name my-cluster --region us-east-1 --zones=us-east-1a,us-east-1b,us-east-1c

  • create argocd namespace

kubectl create namespace argocd
 

  • install helm chart for argocd
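If the Argo Helm repo is not added yet, add it first (this is the standard Argo project Helm repo):

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update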

helm install argocd argo/argo-cd -n argocd
NAME: argocd
LAST DEPLOYED: Thu Jan 25 08:29:42 2024
NAMESPACE: argocd
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
In order to access the server UI you have the following options:

1. kubectl port-forward service/argocd-server -n argocd 8080:443

    and then open the browser on http://localhost:8080 and accept the certificate

2. enable ingress in the values file `server.ingress.enabled` and either
      - Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
      - Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts

After reaching the UI the first time you can login with username: admin and the random password generated during the installation. You can find the password by running:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
(You should delete the initial secret afterwards as suggested by the Getting Started Guide: https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli)


  • port-forward ArgoCD to the local web UI

kubectl port-forward service/argocd-server -n argocd 8080:443

  • get the login password for the ArgoCD web UI

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d


ArgoCD Settings

connect repo -

Choose your connection method - HTTPS
(or CONNECT REPO USING SSH) - paste the Repository URL here

Cluster

Server
Credential
Name
Namespace -

Project

default
manjeet

Default -
Name - default
Source Repository
*

Destination
*

Cluster Resource ALLOW LIST
Kind  Group
*     *

Applications

Create
Application Name - test
Project Name - manjeet

Sync Policy - Manual

Source - repository Url - HTTPS
Path - .

Destination
Cluster URL - https://kubernetes.default.svc
Namespace - manjeet
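The same application can also be declared as an ArgoCD Application manifest instead of clicking through the UI; a sketch using the values above (the repo URL placeholder is an assumption):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: test
  namespace: argocd
spec:
  project: manjeet
  source:
    repoURL: https://github.com/<your-user>/<your-repo>.git  # the connected HTTPS repo
    path: .
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: manjeet
  # Sync Policy "Manual": omit syncPolicy.automated so syncs are triggered by hand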