Monday, July 26, 2021

Ansible - Ansible question and answers


Ansible - configuration management tool - Ansible is an agentless system; unlike Chef, no client agent is required on the nodes. It uses SSH directly to communicate.

Ansible is an open-source IT configuration management, deployment and orchestration tool. It aims to provide large productivity gains across a wide variety of automation challenges.

You can use this tool whether your servers are on-premises or in the cloud.

It turns your code into infrastructure, i.e. your computing environment has some of the same attributes as your application.


Ansible server --------- node1, node2, node3

Like a recipe in Chef, we have a playbook in Ansible.

Resources in Chef correspond to modules in Ansible.

Advantages
Ansible is free to use.
Ansible is very consistent and lightweight, and no constraints regarding the OS or underlying hardware are present.

It is very secure due to its agentless capabilities and OpenSSH security features.

Ansible does not need any special system administrator skills to install and use it (playbooks are written in YAML).

Push mechanism

Disadvantages
Insufficient user interface; Ansible Tower provides a GUI, but it is still in its development stages.
Full automation cannot be achieved with Ansible alone.
Relatively new to the market, therefore limited support and documentation are available.

Terms used in Ansible
Ansible server - the machine where Ansible is installed and from which all tasks and playbooks will be run.
Module - basically, a module is a cmd or set of similar cmds meant to be executed on the client side.
Task - a task is a section that consists of a single procedure to be completed.
Role - a way of organizing tasks and related files so they can be called from a playbook.
Fact - information fetched from the client system's global variables with the gather_facts operation.

Inventory - file containing data about the Ansible client servers.
Play - execution of a playbook.
Handler - task which is called only if a notifier is present.

Playbook - it consists of code in YAML format, which describes tasks to be executed.
Host - nodes, which are automated by Ansible.

..........................................................

Create 3 instances in the same AZ.
Take access of all the machines. Now go inside the Ansible server and download the Ansible package:
wget http://.....rpm

Now do ls

yum install epel...rpm
yum update -y

Now we have to install all the required packages one by one:

yum install git python python-devel python-pip openssl ansible -y

Now go to the hosts file inside the Ansible server and paste the private IPs of node1 & node2:
vi /etc/ansible/hosts
[demo]
node1 private ip
node2 private ip

Now this hosts file will only work after updating the ansible.cfg file.

vi /etc/ansible/ansible.cfg

Uncomment these lines:
inventory = /etc/ansible/hosts
sudo_user = root

Now create one user and set a password on all three instances:

adduser ansible
passwd ansible

Now switch to the ansible user:
su - ansible

This ansible user doesn't have sudo privileges yet, so edit the sudoers file:
visudo

Now go inside this file and add:
root    ALL=(ALL)    ALL
ansible    ALL=(ALL)    NOPASSWD:ALL

Now do the same thing on the other nodes as well.
Now go to the Ansible server and try to install the httpd package as the ansible user:

sudo yum install httpd -y

Now establish a connection b/w the server and a node; go to another server:
ssh 172xxx

Now we have to make some changes in the sshd_config file; go into the Ansible server:

vi /etc/ssh/sshd_config

Enable password login (set PasswordAuthentication yes) & save.
Do the same in node1 & node2 also; now verify from the Ansible server:


su - ansible
ssh 172xxxx

Now it asks for the password; enter it, and after that you will be inside node1.

..................................................................

Now do ssh-keygen
ls -a
cd .ssh
ls
id_rsa  id_rsa.pub

Copy the public key to node1 and node2:

ssh-copy-id ansible@172.31.27.226


54.179.16.139    172.31.27.226
13.229.247.99    172.31.30.185
54.169.186.120    172.31.29.92

ssh -i "AWSSingaporeKey.pem" ec2-user@ec2-54-179-16-139.ap-southeast-1.compute.amazonaws.com
ssh -i "AWSSingaporeKey.pem" ec2-user@ec2-13-229-247-99.ap-southeast-1.compute.amazonaws.com
ssh -i "AWSSingaporeKey.pem" ec2-user@ec2-54-169-186-120.ap-southeast-1.compute.amazonaws.com

"all" pattern refers to all the machines in an inventory
ansible all --list-hosts
ansible <group-name> --list-hosts
ansible <group-name>[0] --list-hosts        indexes: 0,1,2,......,-1

groupname[0] picks the 1st node
groupname[-1] picks the last node
groupname[1-2] picks the 2nd to 3rd nodes
groupname1,groupname2 picks multiple groups
demo[0-1],test[0-1]

.........................................

1. Ad-hoc cmds        simple linux cmds (temporary), but no idempotency
2. Modules        | YAML lang | a single cmd runs at a time, 1 module at a time
3. Playbooks        | YAML lang | if more than 1 module is run, it is called a playbook

Ansible modules and playbooks both have idempotency, and they use the setup module, which is just like Ohai in Chef.

Ad-hoc cmds are cmds which can be run individually to perform quick functions.

These ad-hoc cmds are not used for configuration management and deployment, because these cmds are for one-time usage.
The Ansible ad-hoc cmd uses the /usr/bin/ansible cmd-line tool to automate a single task.

Ad-hoc cmd
ansible demo -a "ls"            -a means argument
ansible demo[0] -a "touch file"    
ansible all -a "touch file1"


ansible demo -a "sudo yum install httpd -y"
ansible demo -ba "yum install httpd -y"
ansible demo -ba "yum remove httpd -y"


Ansible Modules
Ansible ships with a number of modules (called the module library) that can be executed directly on remote hosts or through playbooks.

Your library of modules can reside on any machine; there are no servers, daemons, or databases required.

The default location for the inventory file is /etc/ansible/hosts

Ansible Modules
ansible demo -b -m yum -a "pkg=httpd state=present"        install=present, uninstall=absent, update=latest
ansible demo -b -m yum -a "pkg=httpd state=latest"
ansible demo -b -m yum -a "pkg=httpd state=absent"
ansible demo -b -m service -a "name=httpd state=started"
ansible demo -b -m user -a "name=raj"
ansible demo -b -m copy -a "src=file1 dest=/tmp"

copy, command, user, package, service, file, unarchive, lineinfile, firewalld, template


Setup Module
ansible demo -m setup
ansible demo -m setup -a "filter=*ipv4*"
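
Facts gathered by the setup module can also be referenced as variables inside playbooks. A minimal sketch (debug is a standard module; ansible_default_ipv4 is a standard fact):

--- # print a gathered fact
- hosts: demo
  user: ansible
  gather_facts: yes

  tasks:
  - name: show the default IPv4 address
    debug:
      msg: "This host's IP is {{ ansible_default_ipv4.address }}"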
.....................................................

Playbook

Playbooks in Ansible are written in YAML format.
YAML is a human-readable data serialization language; it is commonly used for configuration files.

A playbook is like a file where you write code; it consists of vars, tasks, handlers, files, templates and roles.

Each playbook is composed of one or more 'plays' in a list; a module itself is a collection of configuration code.

Playbooks are divided into many sections, like:
Target section - hosts or nodes; defines the hosts against which the playbook's tasks have to be executed

Variable section - defines variables

Task section - list of all the modules that we need to run, in order

YAML

For Ansible, nearly every YAML file starts with a list.

Each item in the list is a set of key-value pairs, commonly called a dictionary.

YAML files conventionally begin with "---" and can end with "...".

All members of a list must be lines beginning at the same indentation level, starting with "- " (a dash and a space).

for eg

--- # a list of fruits
fruits:
  - mango
  - strawberry
  - banana
  - grapes
  - apple
...

a dictionary is represented in a simple key: value form
for eg
--- # details of a customer
customer:
    name: rajput
    job: devops

...........................

playbook
vi target.yml
--- # Target playbook
- hosts: demo
  user: ansible
  become: yes
  connection: ssh
  gather_facts: yes
        

Now to execute this playbook
ansible-playbook target.yml

................................
Now create one more playbook in ansible server
vi target.yml
--- # Target playbook
- hosts: demo
  user: ansible
  become: yes
  connection: ssh
 
  tasks:
  - name: install httpd on linux
    action: yum name=httpd state=installed
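
The action: yum name=... form is legacy shorthand; newer Ansible versions prefer the module name as the key with YAML arguments. An equivalent sketch:

--- # Target playbook (modern module syntax)
- hosts: demo
  become: yes

  tasks:
  - name: install httpd on linux
    yum:
      name: httpd
      state: present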

.....................................................

Variable
Ansible uses variables, defined ahead of time, to enable more flexibility in playbooks and roles.
They can be used to loop through a set of given values, access various information like the hostname of a system, and replace certain strings in templates with specific values.

Put the vars section above the tasks section so that we define variables first & use them later.

now
vi vars.yml
--- # my variable playbook
- hosts: demo
  user: ansible
  become: yes
  connection: ssh

  vars:
    pkgname: httpd    
 
  tasks:
  - name: install httpd on linux
    action: yum name='{{pkgname}}' state=installed
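
Variables defined under vars can also be overridden from the command line with -e (extra vars), which take the highest precedence. For example:

ansible-playbook vars.yml -e "pkgname=vsftpd"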

 ...........................

Handler Section
A handler is exactly the same as a task, but it will run only when called by another task.

OR

Handlers are just like regular tasks in an Ansible playbook, but they are only run if the calling task contains a notify directive and also indicates that it changed something.

Handlers are used where we have a dependency, e.g. a task installs and starts the httpd service; if httpd did not get installed, there is no point in starting the service, so the first task needs to be performed first.

vi handler.yml
--- # handler playbook
- hosts: demo
  user: ansible
  become: yes
  connection: ssh

  tasks:
  - name: install httpd on linux
    action: yum name=httpd state=installed
    notify: restart HTTPD

  handlers:
  - name: restart HTTPD
    action: service name=httpd state=restarted
.........................................................

Dry run
runs the playbook in check mode: it reports what would change without actually applying anything

ansible-playbook handler.yml --check

Loops
Sometimes you want to repeat a task multiple times; in computer programming, this is called a loop. Common Ansible loop uses include changing ownership on several files and/or directories with the file module, creating multiple users with the user module, and repeating a polling step until a certain result is reached.

vi loops.yml
--- # loops playbook
- hosts: demo
  user: ansible
  become: yes
  connection: ssh

  tasks:
   - name: add a list of users
     user: name='{{item}}' state=present
     with_items:
             - manjeet
             - yadav
             - rohit
             - sonam
             - suresh
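
with_items still works, but newer Ansible versions prefer the loop keyword; an equivalent sketch:

tasks:
 - name: add a list of users
   user:
     name: "{{ item }}"
     state: present
   loop:
     - manjeet
     - yadav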
..................................

Conditions
Whenever we have different scenarios,
we put conditions according to the scenario.

when statement
sometimes you want to skip a particular cmd in a particular node
vi condition.yml
--- # condition playbook
- hosts: demo
  user: ansible
  become: yes
  connection: ssh

  tasks:
   - name: install httpd service centos
     command: yum -y install httpd
     when: ansible_os_family == "RedHat"
   - name: install apach service on ubuntu
     command: apt-get -y install apache2
     when: ansible_os_family == "Debian"
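
Using the command module here bypasses idempotency; the same logic with the package modules stays idempotent. A sketch:

tasks:
 - name: install httpd on RedHat family
   yum:
     name: httpd
     state: present
   when: ansible_os_family == "RedHat"
 - name: install apache2 on Debian family
   apt:
     name: apache2
     state: present
   when: ansible_os_family == "Debian"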

.....................................

Vault    - password protected

Ansible allows keeping sensitive data such as passwords or keys in encrypted files, rather than as plaintext in your playbooks.

creating a new encrypted playbook
ansible-vault create vault.yml

editing the encrypted playbook
ansible-vault edit vault.yml

To change the password
ansible-vault rekey vault.yml

To encrypt an existing playbook
ansible-vault encrypt target.yml

To decrypt an encrypted playbook
ansible-vault decrypt target.yml

Run vault playbook

ansible-playbook vault.yml --ask-vault-pass
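
To avoid typing the password interactively, a vault password file can be supplied instead (the file path here is arbitrary):

ansible-playbook vault.yml --vault-password-file ~/.vault_pass.txt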

.............................................


ansible-galaxy init rolename

Roles structure

dummy/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml


Default: Contains the default variables that are going to be used by this role.
Files: Contains files that can be deployed by this role, i.e. files that need to be sent to the hosts while configuring the role.
Handlers: Contains handlers which may be used by this role or even anywhere outside this role.
Meta: Defines metadata for this role. Basically, it contains files that establish role dependencies.
Tasks: Contains the main list of tasks that are to be executed by the role. It contains the main.yml file for that particular role.
Templates: Contains files which can be modified and added to the host being provisioned; Jinja2 (a templating language) is used to achieve the modifications.

Tests: This directory is used to integrate testing with Ansible playbooks.

Vars: This directory consists of other variables that are going to be used by the role. These variables can be defined in your playbook, but it's a good habit to define them in this section.


Roles
We can use two techniques for reusing a set of tasks: includes and roles.

Roles are good for organizing tasks and encapsulating data needed to accomplish those tasks

we can organize playbooks into a directory structure called roles

Ansible Roles

Adding more and more functionality to the playbook will make it difficult to maintain in a single file.

Roles
Default: It stores the role/application default variables, e.g. if you want to run on port 80 or 8080, then the variable needs to be defined in this path.

Files: it contains files that need to be transferred to the remote VM (static files)

Handlers: they are triggers or tasks; we can segregate all the handlers required in the playbook

Meta: this directory contains files that establish role dependencies, e.g. author name, supported platforms, dependencies if any

Tasks: it contains all the tasks that would normally be in the playbook, e.g. installing packages and copying files

Vars: variables for the role can be specified in this directory and used in your configuration files; both vars and defaults store variables



.............................
playbook/
├── master.yml            (targets the hosts and calls the roles)
└── roles/
    └── myroles/
        ├── tasks/main.yml
        ├── vars/main.yml
        └── handlers/main.yml

Roles

mkdir -p playbook/roles/webserver/tasks
tree

touch playbook/roles/webserver/tasks/main.yml

touch playbook/master.yml

vi playbook/roles/webserver/tasks/main.yml
- name: install httpd
  yum: pkg=httpd state=latest


vi playbook/master.yml
--- # master playbook
- hosts: demo
  user: ansible
  become: yes
  connection: ssh

  roles:
          - webserver

Ansible modules:

1. copy
The copy module allows you to copy a file from the Ansible control node to the target hosts. In addition to copying the file, it allows you to set ownership, permissions, and SELinux labels to the destination file. Here's an example of using the copy module to copy a "message of the day" configuration file to the target hosts:

- name: Ensure MOTD file is in place
  copy:
    src: files/motd
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644


2. template
The template module works similarly to the copy module, but it processes content dynamically using the Jinja2 templating language before copying it to the target hosts.

- name: Ensure MOTD file is in place
  template:
    src: templates/motd.j2
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644
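
For illustration, templates/motd.j2 might contain a fact reference that gets rendered per host (this file content is a hypothetical example):

Welcome to {{ ansible_hostname }}
This server is managed by Ansible.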

3. user
The user module allows you to create and manage Linux users in your target system. This module has many different parameters, but in its most basic form, you can use it to create a new user.

- name: Ensure user ricardo exists
  user:
    name: ricardo
    group: users
    groups: wheel
    uid: 2001
    password: "{{ 'mypassword' | password_hash('sha512') }}"
    state: present

4. package
The package module allows you to install, update, or remove software packages from your target system using the operating system standard package manager.

- name: Ensure Apache package is installed
  package:
    name: httpd
    state: present

5. service
The service module allows you to start, stop, restart, and enable services on the target system.

- name: Ensure SSHD is started
  service:
    name: sshd
    state: started

6. firewalld
Use the firewalld module to control the system firewall with the firewalld daemon on systems that support it, such as Red Hat-based distributions.

- name: Ensure port 80 (http) is open
  firewalld:
    service: http
    state: enabled
    permanent: yes
    immediate: yes

7. file
The file module allows you to control the state of files and directories: setting permissions, ownership, and SELinux labels.

- name: Ensure directory /app exists
  file:
    path: /app
    state: directory
    owner: ricardo
    group: users
    mode: 0770

8. lineinfile
The lineinfile module allows you to manage single lines on existing files. It's useful to update targeted configuration on existing files without changing the rest of the file or copying the entire configuration file.

- name: Ensure host rh8-vm03 in hosts file
  lineinfile:
    path: /etc/hosts
    line: 192.168.122.236 rh8-vm03
    state: present

9. unarchive
Use the unarchive module to extract the contents of archive files such as tar or zip files. By default, it copies the archive file from the control node to the target machine before extracting it. Change this behavior by providing the parameter remote_src: yes.

- name: Extract contents of app.tar.gz
  unarchive:
    src: /tmp/app.tar.gz
    dest: /app
    remote_src: yes

10. command
The command module is a flexible one that allows you to execute arbitrary commands on the target system. Using this module, you can do almost anything on the target system as long as there's a command for it.

- name: Run the app installer
  command: "/app/install.sh"

Placing the task on ServerB

- hosts: ServerB
  tasks:
    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /path/on/server_a
        dest: /path/on/server_b
      delegate_to: ServerA

This uses the default mode: push, so the file gets transferred from the delegate (ServerA) to the current remote (ServerB).

Placing the task on ServerA

- hosts: ServerA
  tasks:
    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /path/on/server_a
        dest: /path/on/server_b
        mode: pull
      delegate_to: ServerB
This uses mode: pull to invert the transfer direction.
 
What is configuration management?

Do you think ansible is better than other configuration management tools? If yes, why ?

Can you write an ansible playbook to install httpd service and get it running?

How has ansible helped your organization?

what is ansible dynamic inventory?

what is ansible tower and have you used it? if yes, why?

How do you manage the RBAC of users for ansible tower?

what is the ansible-galaxy command and what is it used for?

can you explain the structure of an Ansible playbook using roles?

what are handlers in ansible and why are they used?

i would like to run a specific set of tasks only on windows vms and not linux vms; is it possible?

Does ansible support parallel execution of tasks?

what is the protocol that Ansible uses to connect to windows vms?

Can you place them in the order of precedence?
playbook group_vars, role vars and extra vars

How do you handle secrets in Ansible?

Can we use Ansible for IaC (Infrastructure as Code)? If yes, can you compare it with other IaC tools like Terraform?

Can you talk about an Ansible playbook and how it helped your company?

what do you think Ansible can improve?

Docker - Docker question & answer

Docker 


Docker is an advanced form of virtualization.


Docker Hub
    |
    |
container/OS
docker engine
    OS
   H/W


VM    VM
OS    OS
ESXi - Hypervisor
Physical H/W


Docker is an open-source centralized platform designed to create, deploy and run applications.
Docker uses containers on the host OS to run applications. It allows applications to use the same Linux kernel as the host computer, rather than creating a whole virtual OS.

We can install Docker on any OS, but the Docker engine runs natively only on Linux distributions.
Docker is written in the Go language.
Docker is a tool that performs OS-level virtualization, also known as containerization.
Before Docker, many users faced the problem that a particular piece of code ran on the developer's system but not on the user's system.
Docker is a set of platform-as-a-service products that use OS-level virtualization, whereas VMware uses hardware-level virtualization.


Advantages of Docker
No Pre-allocation of RAM
CI Efficiency - docker enables you to build a container image and use that same image across every step of the deployment process
less cost


Developer
   |
   |
   |
Docker file ---Docker engine/daemon(image)----Docker Hub(Registry)
            |
            |
            container





Docker Daemon -
The Docker daemon runs on the host OS.
It is responsible for running containers and managing Docker services.
Docker daemons can communicate with other daemons.

Docker client
Docker users can interact with the Docker daemon through a client.
The Docker client uses commands and a REST API to communicate with the Docker daemon.
When a user runs any docker cmd on the client terminal, the terminal sends these cmds to the Docker daemon.
It is possible for a Docker client to communicate with more than one daemon.

Docker Host
A Docker host is used to provide an environment to execute and run applications. It contains the Docker daemon, images, containers, networks and storage.

Docker Hub/Registry
Public
Private

Docker Images
Docker images are read-only binary templates used to create Docker containers

OR

Single file with all dependencies and configuration required to run a program

Ways to create an image
1. Take an image from Docker Hub
2. Create an image from a Dockerfile
3. Create an image from an existing container

Docker container
Containers hold the entire package that is needed to run the application
 

Or

In other words, we can say that the image is a template and the container is a copy of that template.
A container is like a virtual machine.
Images become containers when they run on the Docker engine.

yum install docker -y

To see all images present in your local
docker images

To find out images in docker hub
docker search jenkins

To downloads image from dockerhub to local machine
docker pull jenkins

To give name to container
docker run -it --name bhupender ubuntu /bin/bash    run (create and start) -it (interactive mode / terminal)

To check, service is start or not
service docker status

To start container
docker start bhupender

To go inside container
docker attach bhupender

To see all containers
docker ps -a

To see only running containers
docker ps

To stop container
docker stop bhupender

To delete container
docker rm bhupender

To delete image
docker rmi centos7

rename container name
docker container rename c7fdscsd webserver

To show RAM of container
docker container stats

docker container top 50ef6499052d
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                27777               27755               0                   20:04               pts/0               00:00:00            /bin/bash

To exit from a container without stopping it (the container stays running):

Ctrl+P, then Ctrl+Q

docker container stats
CONTAINER ID   NAME            CPU %     MEM USAGE / LIMIT     MEM %     NET I/O       BLOCK I/O     PIDS
50ef6499052d   musing_austin   0.00%     2.043MiB / 7.561GiB   0.03%     2.49kB / 0B   10.4MB / 0B   1

Whereas with the docker exec command you can specify which shell you want to enter. It will not take you to PID 1 of the container; it will create a new process for bash: docker exec -it <container-id> bash. Exiting out of the container will not stop the container.

Create container
docker run -it --name bhupicontainer centos /bin/bash
cd /tmp/

Now create one file inside this tmp directory
touch myfile

Now if you want to see the difference b/w the base image & the changes made on it:
docker diff bhupicontainer

C /root
A /root/.bash_history
C /tmp
A /tmp/myfile

Now create an image of this container:
docker commit bhupicontainer updateimage

docker images


Now create container from this image
docker run -it --name rajcontainer updateimage /bin/bash

ls
cd /tmp
ls
myfile

Creating an image from a Dockerfile
A Dockerfile is basically a text file that contains a set of instructions.
Automation of Docker image creation.

Docker components
FROM -
For base image this cmd must be on top of the dockerfile.

RUN -
To execute cmd, it will create a layer in image.

MAINTAINER -
Author / Owner / Description

COPY -
copy files from local system (docker vm), we need to provide route, destination, we cant download file from internet and only remote repo.

ADD -
Similar to copy but, it provides files from internet, also we extract file at docker image side.

EXPOSE -
To expose ports, such as port 8080 for tomcat, port 80 for nginx etc.

WORKDIR -
To set working directory for a container.

CMD -
Executes a cmd, but at container start time (not during the image build).

ENTRYPOINT -
Similar to CMD, but has higher priority over CMD; the first cmds will be executed by ENTRYPOINT only.

ENV -
Environment variables

Dockerfile
1. create a file named dockerfile
2. add instruction in dockerfile
3. Build dockerfile to create image
4. Run image to create container


vi Dockerfile-----------------------
FROM ubuntu
RUN echo "Technically giftgu" > /tmp/testfile
------------------------------------


To create image out of dockerfile
docker build -t myimg .
docker ps -a
docker images

Now create container from the above image
docker run -it --name mycontainer myimg /bin/bash
cat /tmp/testfile

................................................
vi Dockerfile
FROM centos
WORKDIR /tmp
RUN echo "Welcome to nonstop step" > /tmp/testfile
ENV myname manjeetyadav
COPY testfile1 /tmp
ADD test.tar.gz /tmp
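
For this Dockerfile to build, testfile1 and test.tar.gz must already exist in the build context (the directory you build from); ADD will auto-extract the tarball inside the image. A sketch:

touch testfile1
tar -czf test.tar.gz testfile1
docker build -t myimg2 .
docker run -it --name mycontainer2 myimg2 /bin/bash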

.................................

Volume
A volume is simply a directory inside our container.
Firstly, we have to declare a directory as a volume and then share the volume.
Even if we stop the container, we can still access the volume.
A volume is created in one container.
You can't create a volume from an existing container.
You can share one volume across any number of containers.
A volume will not be included when you update an image.
You can map volumes in two ways:
Container - container
Host - container


Benefits of volumes
Decoupling the container from its storage
Sharing a volume among different containers
Attaching a volume to containers
On deleting a container, its volume is not deleted

..........................
Creating a volume from a Dockerfile
create a Dockerfile and write
FROM centos
VOLUME ["/myvolume"]

Then create image from this dockerfile
docker build -t myimage .

Now create a container from the image & Run
docker run -it --name container1 myimage /bin/bash

Now do ls; you can see myvolume

Now, share volume with another container
Container1 -- Container2

docker run -it --name container2 --privileged=true --volumes-from container1 centos /bin/bash    (container2 is a new container)

Now after creating container2, myvolume is visible; whatever you do in one container's volume can be seen from the other
touch /myvolume/samplefile
docker start container1
docker attach container1
ls /myvolume
you can see samplefile

.....................................
Now try to create volume by using cmd
docker run -it --name container3 -v /volume2 centos /bin/bash

do ls - cd /volume2

Now create one file cont3file and exit
Now create one more container, and share volume

docker run -it --name container4 --privileged=true  --volumes-from container3 centos /bin/bash

Now you are inside container, do ls you can see volume2

Now create one file inside this volume and then check in container3, you can see that file


...........................................................
Volume (Host - container)
the host's files are in /home/ec2-user
docker run -it --name hostcount -v /home/ec2-user:/rajput --privileged=true centos /bin/bash

cd /rajput
do ls, now you can see all files of host machine

touch rajputfile     (in container)
exit

Now check in EC2 machine, you can see this file

.......................
Some other cmd for volume

docker volume ls
docker volume create <Volume name>
docker volume rm <volume name>
docker volume prune    {it removes all unused docker volumes}
docker volume inspect <volume name>
docker container inspect <container name>

.................................
Expose (port expose)

docker run -td --name techserver -p 80:80 centos    (-d means detached: it runs the container in the background, without going inside it)

Difference b/w the EXPOSE cmd and -p: with EXPOSE, the container can communicate with other containers only; the port is not reachable from the host machine.
-p means the port is exposed to the host and the outside world as well.

docker ps
docker port techserver            it will show all port mappings of the container

docker exec -it techserver /bin/bash    to go inside the container
yum  install httpd -y
cd /var/www/html
echo "Subscribe Tech Guftugu" > index.html
systemctl start httpd


................................................
docker run -td --name myjenkins -p 8080:8080 jenkins

................................


diff b/w docker attach and docker exec?
Docker exec creates a new process in the container's environment, while docker attach just connects the standard input/output/error of the main process inside the container to the corresponding standard input/output/error of the current terminal.

docker exec is specifically for running new things in an already started container, be it a shell or some other process.

what is the diff b/w expose and publish a docker?
basically you have three options
1. neither specify expose nor -p
2. only specify expose
3. specify expose and -p

1. If you specify neither EXPOSE nor -p, the service in the container will only be accessible from inside the container itself.
2. If you EXPOSE a port, the service in the container is not accessible from outside Docker, but it is from inside other Docker containers, so this is good for inter-container communication.

3. If you EXPOSE and -p a port, the service in the container is accessible from anywhere, even outside Docker.
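
For illustration (the image name myimg and the ports are arbitrary):

docker run -d myimg                 only other containers can reach the service
docker run -d -p 8080:80 myimg      the host and the outside world can reach it on port 8080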

................................................

upload container image to docker hub - pull/push

docker run -it ubuntu /bin/bash
Now create some files inside container
Now create image of this container

docker commit container1 image1

Now create an account on hub.docker.com

Now on your instance
docker login

enter username/password

Now give tag to your image

docker tag image1 <docker-id>/newimage    (newimage can be any name)
docker tag image1 manjeetyadav19/project1

docker push <docker-id>/newimage
docker push manjeetyadav19/project1

Now you can see this image in docker hub account

Now create one instance in tokyo region and pull image from hub

docker pull manjeetyadav19/project1
docker run -it --name mycontainer manjeetyadav19/project1 /bin/bash

...............................................
some important cmd

stop all running containers : docker stop $(docker ps -a -q)

delete all stopped containers : docker rm $(docker ps -a -q)

delete all images : docker rmi -f $(docker images -q)



Step 1 — Creating an Independent Volume


docker volume create --name DataVolume1

docker run -ti --name=Container1 -v DataVolume1:/datavolume1 ubuntu

echo "Share this file between containers" > /datavolume1/Example.txt

check in shared volume

We can verify that the volume is present on our system with docker volume inspect:
docker volume inspect DataVolume1

Step 2 — Creating a Volume that Persists when the Container is Removed

docker run -ti --name=Container2 -v DataVolume2:/datavolume2 ubuntu

When we restart the container, the volume will mount automatically:
docker start -ai Container2

Docker won’t let us remove a volume if it’s referenced by a container. Let’s see what happens when we try:
docker volume rm DataVolume2

Removing the container won’t affect the volume. We can see it’s still present on the system by listing the volumes with docker volume ls:
docker volume ls

And we can use docker volume rm to remove it:
docker volume rm DataVolume2

Step 3 — Creating a Volume from an Existing Directory with Data

As an example, we’ll create a container and add the data volume at /var, a directory which contains data in the base image:
docker run -ti --rm -v DataVolume3:/var ubuntu


All the content from the base image’s /var directory is copied into the volume, and we can mount that volume in a new container.

This time, rather than relying on the base image’s default bash command, we’ll issue our own ls command, which will show the contents of the volume without entering the shell:
docker run --rm -v DataVolume3:/datavolume3 ubuntu ls datavolume3


Step 4 — Sharing Data Between Multiple Docker Containers

Create Container4 and DataVolume4

Use docker run to create a new container named Container4 with a data volume attached:
docker run -ti --name=Container4 -v DataVolume4:/datavolume4 ubuntu

Create Container5 and Mount Volumes from Container4
docker run -ti --name=Container5 --volumes-from Container4 ubuntu

Start Container 6 and Mount the Volume Read-Only
docker run -ti --name=Container6 --volumes-from Container4:ro ubuntu

Now that we’re done, let’s clean up our containers and volume:
docker rm Container4 Container5 Container6
docker volume rm DataVolume4

   

Overlay Networks

An overlay network uses software virtualization to create additional layers of network abstraction running on top of a physical network. In Docker, an overlay network driver is used for multi-host network communication. This driver utilizes Virtual Extensible LAN (VXLAN) technology which provide portability between cloud, on-premise and virtual environments. VXLAN solves common portability limitations by extending layer 2 subnets across layer 3 network boundaries, hence containers can run on foreign IP subnets.


docker network create -d overlay --subnet=192.168.10.0/24 my-overlay-net
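
Note that the overlay driver requires swarm mode; for standalone containers the network must be created with the --attachable flag. A sketch (names are arbitrary):

docker swarm init
docker network create -d overlay --attachable --subnet=192.168.10.0/24 my-overlay-net
docker run -it --network my-overlay-net centos /bin/bash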


Macvlan Networks

The macvlan driver is used to connect Docker containers directly to the host network interfaces through layer 2 segmentation. No use of port mapping or network address translation (NAT) is needed and containers can be assigned a public IP address which is accessible from the outside world. Latency in macvlan networks is low since packets are routed directly from Docker host network interface controller (NIC) to the containers.

Note that macvlan has to be configured per host, and has support for physical NIC, sub-interface, network bonded interfaces and even teamed interfaces. Traffic is explicitly filtered by the host kernel modules for isolation and security. To create a macvlan network named macvlan-net, you’ll need to provide a --gateway parameter to specify the IP address of the gateway for the subnet, and a -o parameter to set driver specific options. In this example, the parent interface is set to eth0 interface on the host:

docker network create -d macvlan \
--subnet=192.168.40.0/24 \
--gateway=192.168.40.1 \
-o parent=eth0 my-macvlan-net
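
A container can then be attached to the macvlan network, optionally with a fixed IP from the subnet (a sketch):

docker run -it --network my-macvlan-net --ip 192.168.40.10 centos /bin/bash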

 

Docker Machine 

Docker Machine addresses these two broad use cases:

I have an older desktop system and want to run Docker on Mac or Windows
I want to provision Docker hosts on remote systems

 

what is a namespace and what is a cgroup in linux?

Friday, July 23, 2021

Chef - Configuration Management Tool

chef - configuration management tool
Pull based - chef
Push based - Ansible

IaC - Infrastructure as Code
lang - Ruby

Architecture
workstation (cookbook(recipe)) --knife-- chef server --knife--  node 1, node2(ohai)(chef client)

bootstrap - the process by which the chef server connects to a node is known as bootstrap
ohai - collects the current configuration, i.e. maintains the current state info of the node
chef-client - tool that runs on every chef node to pull code from the chef server (convergence)
knife - a cmd-line tool that establishes communication among the workstation, server and nodes
Idempotency - tracking the state of system resources to ensure that changes are not reapplied repeatedly.


How to install chef on workstation
www.chef.io      download chef workstation
now on the linux machine copy the url and do
wget <url>
yum install chef...
which chef
chef --version

cookbook
a cookbook is a collection of recipes and some other files and folders
chefignore - like .gitignore
kitchen.yml - for testing the cookbook
metadata.rb - name, version, author etc. of the cookbook
README.md - information about the usage of the cookbook

recipe - where you write code

mkdir cookbooks
cd cookbooks
chef generate cookbook test-cookbook

cd test-cookbook

chef generate recipe test-recipe.rb
to check use tree cmd

cd ..

vi test-cookbook/recipes/test-recipe.rb


create one recipe
file '/home/ec2-user/myfile' do
content 'welcome to nonstopstep'
action :create
end


chef exec ruby -c test-cookbook/recipes/test-recipe.rb        to check if the code is ok

chef-client -zr "recipe[test-cookbook::test-recipe]"        -z means local mode, -r means run list

2nd recipe
cd test-cookbook
chef generate recipe recipe2
cd ..
vi test-cookbook/recipes/recipe2.rb----------------------------------
package 'tree' do
action :install
end

file '/home/ec2-user/myfile2' do
content 'second Project code'
action :create
end
----------------------------------

chef-client -zr "recipe[test-cookbook::recipe2]"
cat /home/ec2-user/myfile2
yum remove tree -y
chef

recipe 3 deploying an apache server

chef generate  cookbook apache-cookbook
cd apache-cookbook/
chef generate recipe apache-recipe
tree
cd ..
ls

vi apache-cookbook/recipes/apache-recipe.rb--------------------
package 'httpd' do
action :install
end

file '/var/www/html/index.html' do
content 'welcome to nostopstep'
action :create
end

service 'httpd' do
action [:enable, :start]
end
----------------------------------

chef exec ruby -c apache-cookbook/recipes/apache-recipe.rb
chef-client -zr "recipe[apache-cookbook::apache-recipe]"

Resource: it is the basic component of a recipe, used to manage the infrastructure with different kinds of states. There can be multiple resources in a recipe, which will help in configuring and managing the infrastructure.
e.g.
Package: tree, httpd
Service: enable, disable, start, stop, status
user: manage users, create a user.
group: create a group
template: manage a file with an embedded Ruby template (see the sketch after this list)
cookbook_file: transfers a file from the files sub-directory in the cookbook to a location on the node
file: manage the content of a file on the node.
execute: execute a cmd on the node
cron: edit an existing cron file on the node
directory: manages a directory on the node
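
A minimal template resource sketch (the template file motd.erb and its variable are hypothetical; the .erb file would live in the cookbook's templates directory):

template '/etc/motd' do
  source 'motd.erb'                              # templates/motd.erb in the cookbook
  variables(greeting: 'welcome to nonstopstep')  # available as @greeting in the template
  action :create
end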


Types of attributes
default
force-default
normal
override
force-override
automatic

If 2 identical pieces of code are written, but one uses default and the other force_default, the priority of force_default is higher.

Where we can define attributes:
Node
cookbook
roles
environments
recipes

Login into amazon linux machine
sudo su
ohai
ohai ipaddress
ohai memory/total
ohai cpu/0/mhz

recipe 3
cd apache-cookbook
chef generate recipe recipe3
cd ..
vi apache-cookbook/recipes/recipe3.rb----------------------------------
file '/home/ec2-user/basicinfo' do
content "This is to get attribute
HOSTNAME: #{node['hostname']}
IPADDRESS: #{node['ipaddress']}
CPU: #{node['cpu']['0']['mhz']}
MEMORY: #{node['memory']['total']}"
action :create
end
----------------------------------

How to execute a linux cmd in the middle of a recipe

vi test-cookbook/recipes/test-recipe1.rb----------------------------------
execute "run a script" do
command <<-EOH                
mkdir /home/ec2-user/manjeet
touch /home/ec2-user/manjeet/file1
EOH
end
----------------------------------
EOH marks the end of the heredoc in the Ruby script


vi test-cookbook/recipes/test-recipe2.rb----------------------------------
user "rajput" do
action :create
end
----------------------------------

vi test-cookbook/recipes/test-recipe3.rb----------------------------------
group "common" do
action :create
members 'rajput'
append true
end
----------------------------------
append true     means add to the previous members, don't override

What is a run list? It runs multiple cookbooks/recipes in one go.
It runs the recipes in the sequence/order that we mention in the run list.
With this process we can run multiple recipes, but the condition is that there must be only one recipe from one cookbook.

chef-client -zr "recipe[test-cookbook::test-recipe],recipe[apache-cookbook::apache-recipe]"

How to include recipe
vi test-cookbook/recipes/default.rb----------------------------------
include_recipe "test-cookbook::test_recipe"
include_recipe "test-cookbook::recipe2"
----------------------------------

chef-client -zr "recipe[test-cookbook::default]"

combine multiple default
chef-client -zr "recipe[test-cookbook::default],recipe[apache-cookbook::default]"
OR
chef-client -zr "recipe[test-cookbook],recipe[apache-cookbook]"


.............................
chef server is going to be a mediator for the code or cookbook

workstation --chef server -- nodes
first create account on chef server
then attach your workstation to chef-server
Now upload your cookbook from workstation to chef- server
apply cookbooks from chef-server to node

search chef.io - create account
go to the chef account - click on organization - starter kit - download the starter kit
open the downloaded content - unzip - chef-repo

cd chef-repo/
ls -a
cd .chef/
ls
knife.rb
cat knife.rb
you will get url of chef server
 

scp -i "AWSSingaporeKey.pem" -rP 22 /root/Downloads/chef-repo/ ec2-user@ec2-18-139-222-22.ap-southeast-1.compute.amazonaws.com:/home/ec2-user

knife ssl check        to check whether you are connected to the chef server

..........................
Bootstrap a Node - Attaching a node to the chef server
during bootstrap the chef package will be copied onto the node, and the node will also connect to the chef server

How to connect a node to the chef server
create the nodes in the same AZ

Now go to workstation
knife bootstrap node_ip --ssh-user ec2-user --sudo -i node-key.pem -N node1    (node-key.pem in chef-repo folder)
knife bootstrap 172.31.29.224 --ssh-user ec2-user --sudo -i AWSSingaporeKey.pem -N node2
knife bootstrap 172.31.21.197 --ssh-user ec2-user --sudo -i AWSSingaporeKey.pem -N node1

knife node list         to check connected node list

then the node will be visible on the chef server UI also

There are 2 cookbook locations: one created in the root or home folder of the user
(/home/ec2-user/cookbooks/apache-cookbook/),
and another in the chef-repo folder (/home/ec2-user/chef-repo/cookbooks) that was downloaded from the chef server.

Copy the cookbooks from the home folder into chef-repo:
pwd
/home/ec2-user/chef-repo/cookbooks
mv /home/ec2-user/cookbooks/apache-cookbook/ cookbooks/
mv /home/ec2-user/cookbooks/test-cookbook/ cookbooks/

rm -rf /home/ec2-user/cookbooks/        delete old

.........................................................
Now we have to upload the apache-cookbook
knife cookbook upload apache-cookbook

Now check whether the cookbook was uploaded
knife cookbook list

Now we will attach the recipe, which we would like to run on node
knife node run_list set node1 "recipe[apache-cookbook::apache-recipe]"

knife node show node1                    to check attached recipe
Runlist recipe[apache-cookbook::apache-recipe]

..................................................

Now take access of node1 with the help of ssh
run chef-client
Now all the files will be updated; go to the browser, paste the public IP of the node, and you will get the webpage

...........................................................
go to workstation again
vi cookbooks/apache-cookbook/recipes/apache-recipe.rb
Now change some content ("test 2 note added")

go again on node1 and run
chef-client

...........................
now, we do not want to call chef-client manually every time
we want to automate this process
go to node1

vi /etc/crontab
* * * * * root chef-client

now go to chef-workstation
make some changes
vi cookbooks/apache-cookbook/recipes/apache-recipe.rb
("test 3 note added")

upload to the chef-server
knife cookbook upload apache-cookbook

...............................................

Now create one more node node2
In Advanced details (user data):
#!/bin/bash
sudo su
yum update -y
echo "* * * * * root chef-client" >> /etc/crontab

Now go to workstation and run bootstrap cmd
Now attach cookbook to node run list
Now check in browser node2 shows webpage

.....................................................

To see list of cookbook which are present in chef-server
knife cookbook list

To delete cookbook from chef-server
Knife cookbook delete <cookbook name> -y

To see list of nodes which are present in chef server
knife node list

To delete nodes from chef-server
knife node delete <node name> -y

To see the list of clients which are present in chef-server
knife client list

To delete clients from chef-server
knife client delete <client name> -y

To see list of roles which are present in chef-server
knife role list

To delete roles from chef-server
knife role delete <role name> -y


Roles------------------------
cd roles

vi devops.rb
name "devops"
description "Web server role"
run_list "recipe[apache-cookbook::apache-recipe]"

Now come back to chef-repo
Now upload the role to the chef-server
knife role from file roles/devops.rb

If you want to see the role created
knife role list

Now create two instances, node1 & node2, in the same AZ as the workstation
Now bootstrap the node

knife bootstrap <private-ip> --ssh-user ec2-user --sudo -i AWSSingaporeKey.pem -N node1
knife bootstrap <private-ip> --ssh-user ec2-user --sudo -i AWSSingaporeKey.pem -N node2
..................................................................

Now connect these nodes to the role
knife role list
knife node run_list set node1 "role[devops]"
knife node run_list set node2 "role[devops]"

knife node show node1 / 2

knife cookbook upload apache-cookbook

Now check public-ip of any node in browser

cat cookbooks/apache-cookbook/recipes/recipe1.rb

vi roles/devops.rb
name "devops"
description "web server role"
run_list "recipe[apache-cookbook::recipe1]"

knife role from file roles/devops.rb
Now, take access of any node via ssh & check

Now again go to workstation
vi roles/devops.rb
name "devops"
description "web server role"
run_list "recipe[apache-cookbook]"

knife role from file roles/devops.rb
knife cookbook upload apache-cookbook

vi roles/devops.rb
name "devops"
description "web server role"
run_list "recipe[apache-cookbook]","recipe[test-cookbook]"

Now upload this role to server

knife role from file roles/devops.rb

knife cookbook upload test-cookbook

vi cookbooks/test-cookbook/recipes/test-recipe6.rb
%w(httpd mariadb-server unzip git vim).each do |p|
  package p do
    action :install
  end
end

knife cookbook upload test-cookbook

Now go inside any node and search for git; if git is present on the node, it means everything works properly

Git - Github

 Git Github 

Scenario: A and B are working in 1 company; A is in India and B is in Singapore, in different timezones. So:

install git on both machines, A and B
yum install git

git config --global user.name "ajay"
git config --global user.email "ajayyadav19@gmail.com"
git config --list

which git
git --version


create git account on website
https://github.com

go to 1 machine
create direcotry
mkdir mumgit
cd mumgit
git init


create a file
cat > mumbai1
git status                    check status: see whether the code is in the working area or committed
git add .                    add to staging
git commit -m "My First commit from mumbai"    committed; the changes got saved, and versioning started


git status                    now the working area is empty
git log                       it will show the commit ids
git show 8512df9c8bd                it will show that commit's changes

connect the local git to github
git remote add origin https://github.com/manjeetyadav19/repo.git
git push origin master                             push code to github


on the other machine add the remote origin and then pull the data
git pull origin master                    pull data from the repo


To ignore the file while committing
vi .gitignore
*.css
*.java

git add .gitignore
git commit -m "latest update exclude .css"


To check branches
git branch        to check branches
git branch man        to create a branch
git checkout man    to switch branches
git log --oneline    will show all commits, one per line


You can't merge branches of different repositories.

We use the pulling mechanism to merge branches.
git merge man                    to merge a branch


Stashing
git stash            to put data in the stash
git stash list            to see the stashed items list
git stash apply stash@{3}    to apply a stashed item
then you can add and commit

To clear the stash items
git stash clear            clear all stash file

Git reset
to reset staging area
git reset <filename>        reset only one file
git reset            reset all files in the staging area (removes all files from staging)

git reset --hard        it will remove the code/files from both the workspace and the staging area.

git revert
git revert helps you undo an existing commit
git revert <commit-id>

how to delete untracked files
git clean -n            dry run
git clean -f            delete forcefully

Tags
git tag -a <tagname> -m <msg> <commit-ID>

To see
git tag

To see particular commit content by using tag
git show  <tag name>

To delete a tag
git tag -d <tagname>

cloning
git clone https://github.com/manjeetyadav19/repo.git        cloning from github to local system

Question & Answers

how to remove git commit?

git log        check commit
git reset --hard 3432dsfdsf   

how to remove the git commit from the remote repository also?
git push origin +master        (force-push the rewritten master)

what is difference b/w git fetch and pull?
git pull    - fetch and merge both.
git fetch    - fetches, but you will not see the changes yet; to see the changes, you need to merge first.
git merge    - to merge the fetched code with your branch

how to roll back a commit?
in case you just committed and did not push to the repository yet, you can amend.
git commit --amend

how to change your branch name?
git checkout -b chat1
git branch -m chat1 chat

how to clean untracked files in git?
git clean -n        it will show the files that would be cleaned
git clean -f        if you really want to delete the unwanted files

if you committed 3-4 files in master, how do you move them to another branch?
git reset HEAD~3

how do i push branch to git repo?
git push -u origin chat

how to remove already pushed file to git?
git rm --cached chat.rb

also ignore from the system,
create a file .gitignore
and put a file there

How to combine 3-4 commits into 1?
git rebase -i HEAD~3
it will show you a file
pick 43223h added 4th line
pick 323j24 added 5th line
pick 343244 added 6th line

change below:
pick 43223h added 4th line
squash 323j24 added 5th line
squash 343244 added 6th line

how do you see which files went into the combined commit?
git diff-tree --no-commit-id --name-only -r 7bcdss4324


how to merge the code of a commit from some other branch into your branch?
git cherry-pick -x dsfsdf8f9dsf        you can cherry-pick it into another branch


Fork

copying another repo into your own account.





AWS interview question and answer.

You should be aware of the compute services.

What are the services you have worked on AWS?

what are compute services in AWS?
EC2
Elastic container service    - docker
EKS                 - Kubernetes
Lambda                - server less computing(developer)
batch                - if you have a job that can be processed anytime and you don't want to run your service 24*7, you can stop the service; as soon as you upload the data, it will be completed within the window.
Elastic Beanstalk


What are the storage services in AWS?
S3
EFS
Glacier
Storage Gateway


what are the types of virtualization we have on the AWS platform?
HVM
Paravirtual (PV)

1)
HVM AMIs are required to take advantage of enhanced networking and GPU processing
2)
Historically, PV guests had better performance than HVM guests in many cases, but because of enhancements in HVM virtualization and the availability of PV drivers for HVM AMIs, this is no longer true.


What is the difference between using the local instance store and Amazon Elastic Block store(Amazon EBS) for the root device?
what are the types of root device?
EBS - block storage - persistent - not deleted on stop/reboot
instance store - non-persistent - data can be lost on stop/termination





what are type of EBS Volumes and the use cases?
EBS Volume Types
General purpose SSD (gp2)
Provisioned IOPS SSD (io1)
Throughput Optimized HDD (st1)
Cold HDD (sc1)
Magnetic Standard

EBS Volume Categories
Amazon EBS provides a range of options that allow you to optimize storage performance and cost for your workload. These options are divided into two major categories:

SSD-backed Volumes (IOPS-intensive) - we can boot from that  - 2 types - (gp2),(io1)
SSD-backed storage for transactional workloads, such as databases and boot volumes (performance depends primarily on IOPS)

HDD-backed Volumes (MB/s-intensive) - we cant boot from that - 3 types - (st1),(sc1), Magnetic Standard(we can boot on magnetic)
HDD-backed storage for throughput intensive workloads, such as Map Reduce and log processing (performance depends primarily MB/s).

gp2 - upto 10000 IOPS
gp2 is default EBS volume type for the Amazon EC2 instances.
gp2 Volumes are backed by solid-state drives(SSDs)
General purpose, balances both price and performance.
Ratio of 3 IOPS per GB with up to 10,000 IOPS
Boot volume, low latency interactive apps, Dev and test
Volume size : 1 GB-16 TB
Max IOPS/Volume: 10,000
Price: $0.10/GB-month

Provisioned IOPS SSD (io1) - upto 32000 IOPS - both IOPS-intensive and throughput-intensive
These volumes are ideal for both IOPS-intensive and throughput-intensive workloads that require extremely low latency.
Designed for I/O intensive applications such as large relational or NoSQL databases.
Use if you need more than 10000 IOPS.
Can provision up to 32000 IOPS per volume
Volume size: 4GB - 16 TB
Price: $0.125/GB-month

Throughput Optimized HDD(st1) - Throughput intensive -
st1 is backed by hard disk drives (HDDS) and is ideal for frequently accessed, throughput intensive workloads with large datasets
st1 volumes deliver performance in term of throughput, measured in MB/s
Big data
Data warehouses
Log processing
Cannot be a boot volume
Can provision up to 500 IOPS per volume
Volume size : 500 GB - 16 TB
Price : $ 0.045/GB-month

Cold HDD(sc1)
sc1 is also backed by hard disk drives (HDDs) and provides the lowest cost per GB of all EBS volume types
Lowest cost storage for infrequent accessed workloads
file server
cannot be a boot volume
volume size : 500 GB - 16TB
Price : $0.025/GB-month

Magnetic (Standard)
Lowest cost per gigabyte of all EBS volume types that is bootable.
Magnetic volumes are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important.




what is the difference between t2.micro and t3.micro?
RAM and processor combination

why do we attach IAM role with EC2 machine while creating it?
Use case: we need to access an S3 bucket; otherwise a user with user/pass (access keys) would be required, so instead we can directly assign an IAM role with S3 access to the EC2 machine.

what are the usages of TAGs with EC2 / AWS resources?
tags are used to identify team, name, purpose, and for cost tracking

what is T2/T3 unlimited option?
if you go to monitoring on a t2 or t3 instance, there are 2 metrics: CPU credit usage & credit balance. If these credits are spent completely, your machine starts performing slower, or you need to pay extra (the unlimited option).

what are type of hypervisors in AWS?
Nitro
Xen

how can we recover lost ec2 ssh key?
if you have root access and additional sudo access, you can update the key on that particular ec2;
or shut down the machine, mount the root volume on another ec2, and update the ssh key there

how to check shared AMIs?
go to AMIs - check shared AMIs



How do i run systems in the Amazon EC2 env?
EC2 Dashboard - region - AMI - launch template - root partition - vpc/subnet - IAM role - tag - security group - launch


How many instances can i run in amazon EC2?
Go -EC2 - Limits - in your region -

how quickly can i scale my capacity both up and down?
Auto Scaling - Auto Scaling groups - based on scaling policies - how soon you want to scale up or down

how can i request a limit increase?
support center - create case - service limit increase - limit type - like EC2 intances - region - New limit value

how can i check what i am charged for?
billing & cost management dashboard - bill details -
bill details by account - in case multiple accounts are associated - when you have multiple OUs

what is the difference b/w c4 and c5 class instances?
EC2 - instance types - c4.larg, c5 large
c4.large - RAM - 3840 MB, Network perf - Moderate, cost - $0.10/h
c5.large - RAM - 4096 MB, Network perf - Up to 10 Gbps, cost - $0.085/h

you can also check other parameters; select them from settings - check/uncheck options

what are the load balancers you have worked with?
Application(http/https)
Network(TCP/TLS/UDP)
Classic

what are the security options available to secure ec2 instance?
security group - controls who can access your ec2 machine, from where, and on what port
Network ACL - by default configured with your VPC only - VPC - Security - Network ACLs
A network ACL can be applied to multiple subnets - inbound rules, outbound rules - allow, deny, and subnet association

will i lose the metrics data for a terminated amazon ec2 instance or a deleted elastic load balancer?
No, by default CloudWatch retains the metrics for 14 days, so they are not lost immediately.


can i access the metrics data for a terminated amazon ec2 instance or a deleted elastic load balancer?
Yes - CloudWatch retains the metrics of terminated instances and deleted load balancers for 14 days by default, so you can still view them.

what is the difference between hibernate and stop?
hibernate - the contents of RAM are saved to the EBS root volume, so the machine can resume from exactly where it left off.
stop - the RAM data is flushed, and everything has to be reloaded on the next start.

can i enable hibernation on an existing instance?
No - need to select while creating instance

what is a convertible RI?
RI - Reserved Instance - a Convertible RI can be exchanged for a different configuration, e.g. if you purchased C4 capacity and want to change to the cheaper C5, you can convert it.

what are load balancing algorithm with elb?
EC2 - Load balancing - Target groups - edit attributes - deregistration delay, slow start duration, and the load balancing algorithm: round robin or least outstanding requests
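The algorithm can also be switched from the CLI; a sketch, where the target group ARN is a placeholder:

aws elbv2 modify-target-group-attributes \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/0123456789abcdef \
    --attributes Key=load_balancing.algorithm.type,Value=least_outstanding_requests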

how to take backup of your machine?
take snapshots of its EBS volumes,
or create an AMI as a full backup of the machine

How to restore from volume?
Create an EC2 instance with a volume restored from the snapshot,
or create an AMI from the volume backup and launch from it.


When need to create AMI?
When you are scaling up your infra or implementing Auto Scaling, create an AMI so a ready-to-use image is available for your prod launches.

Can i scale my EC2 machine?
yes,
vertically - upgrade to a bigger instance type (more RAM, CPU etc.)
horizontally - if you have a cluster of, say, 5 machines, add 2-3 machines more

How can I maintain the state of my infra?
You can maintain the state of your infra by using Terraform, which records the deployed resources in a state file.

what are security group?
These are firewall rules attached to the instance. If you need to improve security further, you can add network ACLs on top, where you define which IPs are allowed into your env and which should be blocked.

How you migrate to another region?
Go to Volumes - create a snapshot - name it - go to Snapshots - copy the snapshot to the other region - create a volume/AMI from it there
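A sketch of the same copy from the CLI (run against the destination region; the snapshot ID and regions are examples):

aws ec2 copy-snapshot --region us-west-2 --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "copy of web server root volume"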

Can I use the same key pair for multiple regions? No - key pairs are regional, but you can import the public key into another region and then use it.
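A sketch of importing an existing public key into another region (key name, path and region are examples; fileb:// is AWS CLI v2 syntax):

aws ec2 import-key-pair --region eu-west-1 --key-name mykey \
    --public-key-material fileb://~/.ssh/id_rsa.pub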

What is the size of EBS volume?
Up to 16 TB per volume


What is shared AMI?
A developer can share their AMI with other developers/accounts (or make it public)

what is Amazon S3?
Object storage service
100 buckets can be created per account - it is a soft limit, you can request an increase

what is difference between object storage and block storage?
S3 - object storage - cannot be attached to EC2 as a disk (access is over HTTP APIs), so tools like the rsync cmd can't be used on it directly.
EBS - block storage - can be attached to EC2, is persistent and behaves like a normal disk, so rsync works on it.

how much data can i store in amazon S3?
The total amount is virtually unlimited; a single object can be up to 5 TB.

what storage classes does amazon S3 offer?
Standard
Intelligent-Tiering
Standard-IA
one zone-IA
Glacier
Glacier Deep Archive
Reduced Redundancy

how reliable is amazon s3?
Very reliable - S3 is designed for 99.99% availability and 99.999999999% (11 nines) durability; even Amazon keeps its own backups in S3.

what is provisioned capacity unit(PCU) and when should it use PCU?(150MB/s)
In S3 Glacier, a provisioned capacity unit guarantees expedited-retrieval throughput (at least 150 MB/s per PCU) for $100 extra per month; use it when you need expedited retrievals to be reliably available.


S3 is a global service!! why do i need to select a region while creating S3 bucket?
compliance, latency (keep data near your location or users), and disaster recovery - a copy should exist in another region.

How do i decide which AWS Region to store my data in?
Compliance requirements, latency (nearness to your location or users), cost, and disaster recovery - whether another region should hold a copy.

how will i be charged and billed for my use of amazon S3?
You are charged for the storage used, the API requests you make, and data transfer out. For example, with 1 billion small files of a few KB each, changing their storage class incurs a per-request transition fee on every object, which can make it roughly 10 times costlier - AWS charges on the basis of the API calls you are making.

how am i charged for using versioning?
If you have a 1 GB file and keep 10 versions of it, you are paying for 10 GB of storage.

how durable is amazon S3?
99.999999999% (11 nines)

what checksums does amazon S3 employ to detect data corruption?
MD5 checksums and cyclic redundancy checks (CRCs).


what is versioning?
Every upload of the same object keeps the previous copy: upload a file 10 times and 10 versions exist, similar to git history.
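Versioning is enabled per bucket; a sketch with the CLI (bucket name is an example):

aws s3api put-bucket-versioning --bucket my-demo-bucket \
    --versioning-configuration Status=Enabled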

what is DR, RTO and RPO?

what are the EBS volume types?

what are the security services in AWS?

how to take a centralized backup?

how to connect 2 VPCs in different domains?

Thursday, July 22, 2021

Linux interview question and answers.

 Linux

How will you change the default user id value in linux?
id manjeet        check the current uid/gid
vim /etc/login.defs
UID_MIN        defines the minimum uid assigned to new users

cmds to change an existing group id and user id:
groupmod -g 600 group01
usermod -u 900 -g 600 user01
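Note that usermod -u only re-owns files inside the user's home directory; files elsewhere keep the old numeric uid. A sketch to fix them up, assuming the old uid was 1001:

find / -xdev -uid 1001 -exec chown -h user01 {} \;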

Add user in group

usermod -aG groupname username
Remove user from group 
gpasswd -d username groupname 
Delete group
groupdel groupname 



root# rm -rf /tmp/test gives the error "operation not permitted". Reason?
The immutable attribute was set with chattr +i /tmp/test, so even root cannot delete it.
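To verify and clear the attribute:

lsattr -d /tmp/test        the i flag marks it immutable
chattr -i /tmp/test        remove the attribute
rm -rf /tmp/test           now succeeds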


/etc/hosts (which RPM is responsible for creating this file?)
rpm -qf /etc/hosts

what is difference b/w RPM and YUM?
RPM - the Red Hat package manager; it installs a single package and does not resolve dependencies, it only reports them:
rpm -qpR httpd-2.4.6-90.el7.centos.x86_64.rpm        list the dependencies of a package file

yum installs the s/w along with all its dependencies, resolving them from the configured repositories


what is difference b/w Hard and Soft Link?
ll -i /root        displays inode numbers
ln /tmp/test /etc/lokendra        creates a hard link; the inode will be the same as the original's
a hard link can be created only on the same filesystem

ln -s /tmp/test /etc/lokendra        creates a soft (symbolic) link, which has its own inode and points to the target path
a soft link can be created across filesystems, but it breaks if the target is removed
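A quick demonstration (paths are examples):

echo hello > /tmp/test
ln /tmp/test /tmp/hard && ln -s /tmp/test /tmp/soft
ls -li /tmp/test /tmp/hard /tmp/soft        the hard link shares the inode, the soft link has its own
rm /tmp/test
cat /tmp/hard        still works
cat /tmp/soft        fails - dangling symlink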

What is sticky bit?
the sticky bit is set on a folder/directory, not on a file
drwxrwxrwt        it prevents unwanted deletion: in a world-writable directory only the owner of a file (or root) can delete it
chmod 1777 dir
chmod +t dir

How will you check open ports on Linux Server?
netstat -tunlp        (or the newer: ss -tunlp)


How will you check open ports on remote servers (without logging in)?
nmap
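Example scans (host and ports are placeholders):

nmap -p 22,80,443 192.168.1.10        scan specific ports
nmap 192.168.1.10                     scan the 1000 most common ports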

Your site is throwing 500 error, how will you start troubleshooting?
A 500 is a server-side error: check the web server and application error logs, recent deployments/config changes, backend services, and resource usage (CPU, memory, disk).

How will you start troubleshooting if your site is down?
Work from the outside in: DNS resolution, network/firewall, whether the web server process is running and listening, its error logs, and system resources.

How will you create space on disk if it is showing 100% used?
df -Th        check disk space per filesystem
du -sh *     check the space used by each file/folder, then delete or archive the biggest ones
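Two helpers for hunting down the space (a sketch):

du -xh / 2>/dev/null | sort -h | tail -20        biggest directories on the root filesystem
lsof | grep deleted        files deleted but still held open by a process; the space is freed once that process is restarted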

What package provides the sar cmd and what does it do?
sar comes from the sysstat package; it collects and reports system activity (CPU, memory, I/O, network) over time
vi /etc/sysconfig/sysstat        its configuration file


What is Swap Space?
Part of the disk that can be used as RAM to temporarily hold some programs when memory is low.

Recommended swap sizes (Red Hat guidelines):
RAM                Recommended swap        Swap if hibernation is used
less than 2 GB     2 times the RAM         3 times the RAM
2 GB - 8 GB        equal to the RAM        2 times the RAM
8 GB - 64 GB       0.5 times the RAM       1.5 times the RAM
more than 64 GB    workload dependent      hibernation not recommended

What are the basic components of linux?
Kernel - the core component of the OS that manages operations and h/w
Shell - the interpreter used to execute cmds
GUI - the graphical interface for interacting with the system
System utilities - allow the user to manage the computer
Application programs - s/w programs or sets of functions designed to accomplish a specific task

How do Enable/disable Eth Device
vi /etc/sysconfig/network-scripts/ifcfg-<devicename>
To enable: ONBOOT=yes
To disable: ONBOOT=no

What are the process states in linux?
Ready: created and ready to run
Running: being executed
Blocked or wait: process waiting for input from the user or for a resource
Terminated or completed: process finished execution, or was terminated by the OS
Zombie: process terminated, but its info still exists in the process table -
the child died before the parent reaped it, so its structural info remains in the process table;
it is cleared when the parent calls wait(), or by the kernel (via init) once the parent dies.

how to clear zombie process?
A zombie is already dead, so it cannot be killed directly; it has to be reaped by its parent. Signal the parent to reap it, or kill the parent so that init/systemd adopts and clears the zombie (sketch below).
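A minimal sketch, assuming the parent PID found by ps is 1234:

ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'        list zombies with their parent PIDs
kill -s SIGCHLD 1234        ask the parent to reap its children (works only if it handles the signal)
kill -9 1234                last resort: kill the parent so init adopts and reaps the zombie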


Explain each system calls used for process management in linux?
system calls

File related - open(), read(), write(), close(), create()
Device related - read, write, reposition, ioctl
Information - getpid, attributes, get system time
Process control - load, execute, abort, fork, wait, signal, allocate etc.
Communication - pipe()

fork() - used to create a new process
exec() - execute a new program
wait() - wait until the process finishes execution
exit() - exit from the process
getpid() - get the unique process id of the process
getppid() - get the parent process unique id
nice() - change the priority (niceness) of a process

what is env variable?
In Linux and Unix based systems environment variables are a set of dynamic named values, stored within the system that are used by applications launched in shells or subshells. In simple words, an environment variable is a variable with a name and an associated value.


There are several commands available that allow you to list and set environment variables in Linux:

$ printenv //displays all the global ENVs
or
$ set //display all the ENVs(global as well as local)
or
$ env //display all the global ENVs

To set a global ENV
$ export NAME=Value
(a plain NAME=Value without export sets only a local shell variable, not an environment variable)

To set user wide ENVs

These variables are set and configured in ~/.bashrc, ~/.bash_profile, ~/.bash_login, ~/.profile
Step 1: Open the terminal.
Step 2:

$ sudo vi ~/.bashrc

Step 3:Enter password.
Step 4:Add variable in the file opened.

export NAME=Value

Step 5: Save and close the file.
Step 6:

$ source ~/.bashrc


To set system wide ENVs
These variables are set and configured in the /etc/environment, /etc/profile, /etc/profile.d/ and /etc/bash.bashrc files according to the requirement.

Step 1: Open the terminal.
Step 2:

$ sudo -H vi /etc/environment

Step 3:Enter password.
Step 4:Add variable in the file opened.

NAME=Value

Step 5: Save and close the file.
Step 6: Logout and Login again.

Some commonly used ENVs in Linux

$USER: Gives current user's name.
$PATH: Gives search path for commands.
$PWD: Gives the path of present working directory.
$HOME: Gives path of home directory.
$HOSTNAME: Gives name of the host.
$LANG: Gives the default system language.
$EDITOR: Gives default file editor.
$UID: Gives user ID of current user.
$SHELL: Gives location of current user's shell program.


How do you find out all processes that are currently running?

ps -ef        (or: ps aux)

How do you find out the processes that are currently running for a particular user?

ps -u localmanjeet        (or: ps aux | grep localmanjeet)

what is file path of network config?
/etc/sysconfig/network-scripts/

what is file path of DNS config?
/etc/resolv.conf

how to update locate db?
updatedb        rebuilds the locate database (kept in /var/lib/mlocate)
vi /etc/updatedb.conf        controls which paths and filesystems get indexed

what is boot process in linux?

BIOS - So, in simple terms BIOS loads and executes the MBR boot loader.

MBR - in the 1st sector of the bootable disk,
MBR is 512 bytes in size:
1) primary boot loader info in the 1st 446 bytes
2) partition table info in the next 64 bytes
3) MBR validation check in the last 2 bytes.
It contains information about GRUB (or LILO in old systems).
So, in simple terms MBR loads and executes the GRUB boot loader.

GRUB
If you have multiple kernel images installed on your system, you can choose which one to be executed.
GRUB displays a splash screen and waits a few seconds; if you don't enter anything, it loads the default kernel image as specified in the grub configuration file.
GRUB has the knowledge of the filesystem (the older Linux loader LILO didn’t understand filesystem).
Grub configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to this). The following is sample grub.conf of CentOS.
So, in simple terms GRUB just loads and executes Kernel and initrd images.

Kernel
Mounts the root file system as specified in the “root=” in grub.conf
Kernel executes the /sbin/init program
Since init was the 1st program to be executed by Linux Kernel, it has the process id (PID) of 1. Do a ‘ps -ef | grep init’ and check the pid.
initrd stands for Initial RAM Disk.
initrd is used by kernel as temporary root file system until kernel is booted and the real root file system is mounted. It also contains necessary drivers compiled inside, which helps it to access the hard drive partitions, and other hardware.
Once the kernel has extracted itself, it loads systemd, which is the replacement for the old SysV init program, and turns control over to it.


The startup process

systemd
systemd is the mother of all processes and it is responsible for bringing the Linux host up to a state in which productive work can be done
First, systemd mounts the filesystems as defined by /etc/fstab
/etc/systemd/system/default.target,
For a desktop workstation, this is typically going to be the graphical.target, which is equivalent to runlevel 5 in the old SystemV init
For a server, the default is more likely to be the multi-user.target which is like runlevel 3 in SystemV
The emergency.target is similar to single user mode.


SystemV Runlevel     systemd target         systemd target aliases     Description
              halt.target                       Halts the system without powering it down.

0             poweroff.target     runlevel0.target     Halts the system and turns the power off.

S             emergency.target                   Single user mode. No services are running; filesystems are not mounted. This is the most basic level of operation with only an emergency shell running on the main console for the user to interact with the system.

1             rescue.target         runlevel1.target     A base system including mounting the filesystems with only the most basic services running and a rescue shell on the main console.

2                           runlevel2.target     Multiuser, without NFS but all other non-GUI services running.

3             multi-user.target     runlevel3.target     All services running but command line interface (CLI) only.

4                           runlevel4.target     Unused.

5             graphical.target     runlevel5.target     multi-user with a GUI.

6             reboot.target         runlevel6.target     Reboot

              default.target                       This target is always aliased with a symbolic link to either multi-user.target or graphical.target. systemd always uses the default.target to start the system. The default.target should never be aliased to halt.target, poweroff.target, or reboot.target.

Init
Looks at the /etc/inittab file to decide the Linux run level.
Following are the available run levels

    0 – halt
    1 – Single user mode
    2 – Multiuser, without NFS
    3 – Full multiuser mode
    4 – unused
    5 – X11
    6 – reboot
Typically you would set the default run level to either 3 or 5

Runlevel programs
When a run level (target) is entered, its services are started: under SysV init these are the S (start) and K (kill) scripts in /etc/rc.d/rc<N>.d/, under systemd the units wanted by the target.

What is umask?
The user file-creation mask: when a user creates a file, the umask bits are subtracted from the default permissions (666 for files, 777 for directories).

umask u=rwx,g=x,o=        symbolic form
umask 067                 octal form
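A worked example with the common default mask of 022:

umask 022
touch f && mkdir d
ls -ld f d        f gets 644 (666 - 022), d gets 755 (777 - 022)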



What is network bonding in linux?
Network bonding is the process of combining two or more network interfaces into a single logical interface, for redundancy and/or increased throughput.

How to check the default route and routing table?
route -n
netstat -rn
ip route show

what cmd use for Error checking and Error Fixing?
fsck and e2fsck

what is LVM?
LVM (Logical Volume Manager) - lets you create flexible, resizable logical partitions on top of physical disks

Physical Volume (PV), - pvs
Volume Group (VG) -vgs
Logical Volume (LV) - lvs

sda, sdb, sdc
fdisk -l

pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1
pvs

# vgcreate -s 32M tecmint_add_vg /dev/sda1 /dev/sdb1 /dev/sdc1
vgs

vgdisplay tecmint_add_vg

lvs

Method 1: Creating Logical Volumes using PE sizes
vgdisplay tecmint_add_vg

bc
1725 PE / 3 = 575 PE
575 PE x 32 MB = 18400 MB --> 18 GB

# lvcreate -l (number_of_extents) -n (name_of_logical_volume) (volume_group)
# lvcreate -l 575 -n tecmint_documents tecmint_add_vg
# lvcreate -l 575 -n tecmint_manager tecmint_add_vg
# lvcreate -l 575 -n tecmint_public tecmint_add_vg

lvs

Method 2: Creating Logical Volumes using GB sizes
# lvcreate -L 18G -n tecmint_documents tecmint_add_vg

# lvcreate -L 18G -n tecmint_manager tecmint_add_vg

# lvcreate -L 18G -n tecmint_public tecmint_add_vg

# lvcreate -L 17.8G -n tecmint_public tecmint_add_vg        (use a slightly smaller size like this if the VG does not have the full 18 GB left)

Creating File System
# mkfs.ext4 /dev/tecmint_add_vg/tecmint_documents

# mkfs.ext4 /dev/tecmint_add_vg/tecmint_public

# mkfs.ext4 /dev/tecmint_add_vg/tecmint_manager

Mount
# mount /dev/tecmint_add_vg/tecmint_documents /mnt/tecmint_documents/

# mount /dev/tecmint_add_vg/tecmint_public /mnt/tecmint_public/

# mount /dev/tecmint_add_vg/tecmint_manager /mnt/tecmint_manager/

# df -h

Permanent Mounting
# vim /etc/fstab
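Sample /etc/fstab entries for the volumes above (a sketch; adjust devices and options to your setup):

/dev/tecmint_add_vg/tecmint_documents  /mnt/tecmint_documents  ext4  defaults  0 0
/dev/tecmint_add_vg/tecmint_public     /mnt/tecmint_public     ext4  defaults  0 0
/dev/tecmint_add_vg/tecmint_manager    /mnt/tecmint_manager    ext4  defaults  0 0

Run mount -a afterwards to verify the entries before rebooting.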


Tell me linux boot sequence flow?
BIOS - MBR - BOOT LOADER (GRUB) - KERNEL - INIT/RUNLEVEL (systemd target)

what are the inbuilt firewalls in linux?
iptables

Selinux            - /etc/selinux/config
check selinux status     - getenforce

TCPwrappers
Understanding hosts.allow and hosts.deny
<services> : <clients> [: <option1> : <option2> : ...]

How to Use TCP Wrappers to Restrict Access to Services
To allow SSH and FTP access only to 192.168.0.102 and localhost and deny all others, add these two lines in /etc/hosts.deny:
sshd,vsftpd : ALL
ALL : ALL

and the following line in /etc/hosts.allow:
sshd,vsftpd : 192.168.0.102,LOCAL

To allow all services to hosts where the name contains example.com, add this line in hosts.allow:
ALL : .example.com

and to deny access to vsftpd to machines on 10.0.1.0/24, add this line in hosts.deny:
vsftpd : 10.0.1.

what is called .scratch pad of computer?
Cache memory is scratch pad of computer

How can you append one file to another in linux?
To append one file to another:
cat file2 >> file1

To append 2 or more files into 1:
cat file1 file2 > file3

Find a file using the terminal:
find . -name "process.txt"

How to lock user account in linux?

usermod -L testuser
passwd -l testuser

i.e. you can:
lock or disable the password using the passwd cmd,
expire the user account using the usermod or chage cmd,
or change the shell to a non-login shell: usermod -s /sbin/nologin testuser

What is LDAP?
LDAP (Lightweight Directory Access Protocol) - a protocol for querying and managing directory services, commonly used for centralized user accounts and authentication.

Installing OpenLDAP
yum -y install openldap openldap-servers openldap-clients
systemctl enable slapd

Configuring LDAP
ldappasswd

LDAP terminology
Entry (or object): every unit in LDAP considered an entry.
dn: the entry name.
o: Organization Name.
dc: Domain Component. For example, you can write likegeeks.com like this dc=likegeeks,dc=com.
cn: Common Name like the person name or name of some object.

which configuration file is required for ldap clients?
ldap.conf

what is the name of main configuration file name for ldap server?
slapd.conf

how will you verify ldap configuration file?
slaptest -u

what daemon is responsible for tracking events on you system?
syslogd (or rsyslogd on modern systems)

what cmd can use to review boot messages?
dmesg

what is the name and path of the main system log?
/var/log/messages

what cmd is used to remove the password assigned to A group?
gpasswd -r groupname

How can i check who are the users logged in my system?
the users cmd (also: who, w)

what is NIC bonding?
Combining multiple NICs into a single logical interface (e.g. bond0) for redundancy and/or throughput. The available bonding modes:

mode=0    (balance round robin)
mode=1    (Active backup)
mode=2    (Balance XOR)
mode=3    (Broadcast)
mode=4    (802.3ad)
mode=5    (Balance TLB)
mode=6    (Balance ALB)

 lspci | grep Eth
cd /etc/sysconfig/network-scripts/
eth1
eth2

cat /etc/udev/rules.d/70-persistent-ipoib.rules        see the eth1 and eth2 entries

Before starting, stop the NetworkManager service

eth1 file (ifcfg-eth1)
DEVICE=eth1
ONBOOT="yes"
MASTER=bond0
SLAVE=yes
TYPE=Ethernet

eth2 file (ifcfg-eth2)
DEVICE=eth2
ONBOOT="yes"
MASTER=bond0
SLAVE=yes
TYPE=Ethernet

create the bond config file
cat ifcfg-bond0
DEVICE=bond0
IPADDR=<ip>
NETMASK=<mask>
GATEWAY=<gateway>
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=0 miimon=100"        miimon=100 checks every 100 ms whether the other interface is working

then restart the network service

check ip a

cat /proc/net/bonding/bond0    it will display both bonding interface


change mode
BONDING_OPTS=mode=1 miimon=100 primary=eth0

restart network service

How to upgrade centos 7 to 8?

https://www.tecmint.com/upgrade-centos-7-to-centos-8/

Step 1: Install the EPEL Repository

# yum install epel-release -y
Step 2: Install yum-utils Tools
# yum install yum-utils

Thereafter, you need to resolve the RPM packages by executing the command.
# yum install rpmconf
# rpmconf -a

Next, perform a clean-up of all the packages you don’t require.
# package-cleanup --leaves
# package-cleanup --orphans

Step 3: Install the dnf in CentOS 7
Now install dnf package manager which is the default package manager for CentOS 8.

# yum install dnf

You also need to remove the yum package manager using the command.
# dnf -y remove yum yum-metadata-parser
# rm -Rf /etc/yum


Step 4: Upgrading CentOS 7 to CentOS 8
We are now ready to upgrade CentOS 7 to CentOS 8, but before we do so, upgrade the system using the newly installed dnf package manager.
# dnf upgrade

Next, install CentOS 8 release package using dnf as shown below. This will take a while.
# dnf install http://mirror.centos.org/centos/8/BaseOS/x86_64/os/Packages/{centos-linux-repos-8-2.el8.noarch.rpm,centos-linux-release-8.4-1.2105.el8.noarch.rpm,centos-gpg-keys-8-2.el8.noarch.rpm}

Next, upgrade the EPEL repository.
dnf -y upgrade https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm

After successfully upgrading the EPEL repository, remove all the temporary files.
# dnf clean all

Remove the old kernel core for CentOS 7.
# rpm -e `rpm -q kernel`

Next, be sure to remove conflicting packages.
# rpm -e --nodeps sysvinit-tools

Thereafter, launch the CentOS 8 system upgrade as shown.
# dnf -y --releasever=8 --allowerasing --setopt=deltarpm=false distro-sync

Step 5: Install the New Kernel Core for CentOS 8
To install a new kernel for CentOS 8, run the command.
# dnf -y install kernel-core

Finally, install CentOS 8 minimal package.
# dnf -y groupupdate "Core" "Minimal Install"

Now you can check the version of CentOS installed by running.
# cat /etc/redhat-release


What is systemctl ?, how to check running services?
# systemctl --type=service --state=running


How to roll back in linux?
yum install httpd
httpd -v        check the installed version
yum history        list yum transactions with their IDs
yum history undo 7        roll back transaction id 7

File system formats in Windows and Linux?
FAT32
NTFS

EXT4
stands for 4th extended file system
introduced in RHEL 6
Maximum volume size is 1 EB
Maximum file size is 16 TB
Max file name: 255 bytes
Max number of files: 4 billion
cmd to format a partition: mkfs.ext4 /dev/sda4
A directory can contain a max of 64,000 subdirectories


XFS
XFS stands for extents file system
introduced in RHEL 7 (as the default)
Max volume size is 8 EB
Max file size is 8 EB
Max file name is 255 bytes
Max number of files: 2^64
cmd to format a partition: mkfs.xfs /dev/sda4
no practical limit on subdirectories

Difference b/w RHEL 8, 7 and 6?






How to create local REPO?
createrepo        creates the repository metadata for a set of packages
yum-utils        tools to manage repos

mkdir /REPO        create a dir
cp -vR * /REPO        copy all the files of the repository (e.g. from the install media) to the /REPO folder

vi /etc/yum.repos.d/yum.repo        create the yum.repo file in /etc/yum.repos.d
[localrepo]
name=centOS
baseurl=file:///REPO
enabled=1
gpgcheck=0

createrepo -v /REPO
yum repolist

OR Download a local copy of the official CentOS repositories to your server.

sudo reposync -g -l -d -m --repoid=base --newest-only --download-metadata --download_path=/var/www/html/repos/

sudo reposync -g -l -d -m --repoid=centosplus --newest-only --download-metadata --download_path=/var/www/html/repos/

sudo reposync -g -l -d -m --repoid=extras --newest-only --download-metadata --download_path=/var/www/html/repos/

sudo reposync -g -l -d -m --repoid=updates --newest-only --download-metadata --download_path=/var/www/html/repos/

the options are as follows:
    -g - lets you remove packages that fail a GPG check after downloading
    -l - yum plugin support
    -d - lets you delete local packages that no longer exist in the repository
    -m - lets you download comps.xml files, useful for bundling groups of packages by function
    --repoid - specify the repository ID
    --newest-only - only download the latest package version, helps manage the size of the repository
    --download-metadata - download non-default metadata
    --download-path - specifies the location to save the packages

createrepo /var/www/html/repos
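Clients can then consume the mirror over HTTP (assuming a web server serves /var/www/html/repos); a sketch of the client-side .repo file, where the server name is an example:

vi /etc/yum.repos.d/local-mirror.repo
[local-base]
name=Local CentOS base mirror
baseurl=http://repo.example.local/repos/base/
enabled=1
gpgcheck=0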


How to check already installed pacakage dependencies?
systemctl list-dependencies graphical.target        list the units the target pulls in

switch to another target
systemctl isolate multi-user.target        (for example)

systemctl get-default        show the default target

halt    means bring down all services but do not power off
vi /etc/grub2.cfg     this file is generated by the system; at every boot it is read to decide what to load (which kernel, initrd, parameters). When you need rescue/emergency mode you come here (via the GRUB menu) and edit the boot entry.

vi /etc/dracut.conf    the default config file for the initramfs; changes you want in the kernel's initial boot image are made here.

If you want to check more about the kernel initramfs and dracut, check the man page

man dracut.bootup


Rescue mode is equivalent to single user mode and requires the root password.
Rescue mode - when it is unable to complete a regular booting process.
but it does not activate network interfaces and multiple users mode

Emergency mode provides the most minimal environment.
when the system is unable to enter rescue mode.
In emergency mode, the system mounts the root file system as read-only,
does not attempt to mount any other local file systems,
does not activate network interfaces.

Bootup into Emergency mode(target) - during boot - grub2 - press the e
Add parameter at the end of the linux16 line :
systemd.unit=emergency.target
Ctrl+a , Ctrl+e
Ctrl+x

Bootup into Rescue mode(target) - at the GRUB2 menu, select the entry to boot and press the e key
Add parameter at the end of the linux16 line :
systemd.unit=rescue.target
Ctrl+a , Ctrl+e
Ctrl+x

Switch to Emergency mode(target)
# systemctl emergency

Switch to Rescue mode(target)
# systemctl rescue

To prevent systemd from sending informative message:
# systemctl --no-wall emergency
# systemctl isolate emergency.target


how to create any service in Linux?
Create a unit file under /etc/systemd/system/, e.g. /etc/systemd/system/prometheus.service, with [Unit], [Service] and [Install] sections, then reload systemd and enable it (a sketch follows below).

To use the newly created service, reload systemd.

# sudo systemctl daemon-reload
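A minimal sketch of such a unit file, assuming a prometheus binary at /usr/local/bin/prometheus and a dedicated prometheus user (both are examples):

[Unit]
Description=Prometheus monitoring service
After=network.target

[Service]
User=prometheus
ExecStart=/usr/local/bin/prometheus --config.file=/etc/prometheus/prometheus.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable and start it:
# sudo systemctl enable --now prometheus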