Monday, July 26, 2021

Docker - Docker question & answer

Docker 


Docker is an advanced version of virtualization.


Docker Hub
    |
    |
container/OS
docker engine
    OS
   H/W


VM    VM
OS    OS
ESXi / Hyper-V
Physical H/W


Docker is an open-source centralized platform designed to create, deploy and run applications.
Docker uses containers on the host OS to run applications. It allows applications to use the same Linux kernel as the host system, rather than creating a whole virtual OS.

We can install Docker on any OS, but the Docker engine runs natively only on Linux distributions.
Docker is written in the Go language.
Docker is a tool that performs OS-level virtualization, also known as containerization.
Before Docker, many users faced the problem that a particular piece of code ran on the developer's system but not on the user's system.
Docker is a set of platform-as-a-service products that use OS-level virtualization, whereas VMware uses hardware-level virtualization.


Advantages of Docker
No Pre-allocation of RAM
CI Efficiency - docker enables you to build a container image and use that same image across every step of the deployment process
less cost


Developer
   |
   |
   |
Dockerfile --- Docker Engine/Daemon (image) --- Docker Hub (Registry)
            |
            |
            container





Docker Daemon -
The Docker daemon runs on the host OS.
It is responsible for running containers and managing Docker services.
The Docker daemon can communicate with other daemons.

Docker Client
Docker users interact with the Docker daemon through a client.
The Docker client uses commands and the REST API to communicate with the Docker daemon.
When a user runs any Docker command in the client terminal, the client sends that command to the Docker daemon.
It is possible for the Docker client to communicate with more than one daemon.
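
A quick way to see the client and the daemon talking over the API (assuming Docker is installed and the service is running):

docker version        shows both a Client and a Server (daemon) section
docker info           shows daemon-level details such as storage driver and number of containers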

Docker Host
The Docker host provides an environment to execute and run applications. It contains the Docker daemon, images, containers, networks and storage.

Docker Hub/Registry
Public
Private

Docker Images
Docker images are read-only binary templates used to create Docker containers

OR

A single file with all the dependencies and configuration required to run a program

Ways to create an image
1. Take an image from Docker Hub
2. Create an image from a Dockerfile
3. Create an image from an existing container

Docker Container
Containers hold the entire package that is needed to run the application
 

Or

In other words, we can say that the image is a template and the container is a copy of that template.
A container is like a virtual machine.
Images become containers when they run on the Docker engine.

yum install docker -y

To see all images present on your local machine
docker images

To search for images on Docker Hub
docker search jenkins

To download an image from Docker Hub to the local machine
docker pull jenkins

To give a name to a container
docker run -it --name bhupender ubuntu /bin/bash    run = create and start; -it = interactive mode with a terminal

To check whether the Docker service is started or not
service docker status

To start container
docker start bhupender

To go inside container
docker attach bhupender

To see all containers
docker ps -a

To see only running containers
docker ps

To stop a container
docker stop bhupender

To delete a container
docker rm bhupender

To delete an image
docker rmi centos7

To rename a container
docker container rename c7fdscsd webserver

To show resource usage (RAM, CPU) of containers
docker container stats

docker container top 50ef6499052d
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                27777               27755               0                   20:04               pts/0               00:00:00            /bin/bash

To exit from a container without stopping it (detach, leaving the container running)

Ctrl+P, then Ctrl+Q

docker container stats
CONTAINER ID   NAME            CPU %     MEM USAGE / LIMIT     MEM %     NET I/O       BLOCK I/O     PIDS
50ef6499052d   musing_austin   0.00%     2.043MiB / 7.561GiB   0.03%     2.49kB / 0B   10.4MB / 0B   1

With the docker exec command, you can specify which shell you want to enter. It will not take you to PID 1 of the container; it will create a new process for bash: docker exec -it <container-id> bash. Exiting out of that shell will not stop the container.

Create container
docker run -it --name bhupicontainer centos /bin/bash
cd tmp/

Now create one file inside this tmp directory
touch myfile

now if you want to see the difference b/w the base image & changes on it then
docker diff bhupicontainer

C /root
A /root/.bash_history
C /tmp
A /tmp/myfile

Now create an image of this container
docker commit bhupicontainer updateimage

docker images


Now create container from this image
docker run -it --name rajcontainer updateimage /bin/bash

ls
cd /tmp
ls
myfile

Creating an image from a Dockerfile
A Dockerfile is basically a text file; it contains a set of instructions.
It automates Docker image creation.

Dockerfile components
FROM -
Defines the base image; this instruction must be at the top of the Dockerfile.

RUN -
Executes commands at image build time; each RUN instruction creates a new layer in the image.

MAINTAINER -
Author / Owner / Description

COPY -
Copies files from the local system (the Docker host) into the image; we need to provide a source and a destination. It cannot download files from the internet or from a remote repository.

ADD -
Similar to COPY, but it can also fetch files from the internet (by URL), and it automatically extracts archives (e.g. tar files) on the image side.

EXPOSE -
To expose ports, such as port 8080 for tomcat, port 80 for nginx etc.

WORKDIR -
To set working directory for a container.

CMD -
Executes a command, but at container creation/start time rather than at image build time.

ENTRYPOINT -
Similar to CMD, but has higher priority over CMD; the command given in ENTRYPOINT is always executed first, and CMD only supplies default arguments to it.

ENV -
Environment variables
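
Putting these instructions together, a sample Dockerfile might look like the sketch below (testfile1 and test.tar.gz are assumed to exist next to the Dockerfile; all names and the port are only for illustration):

vi Dockerfile-----------------------
FROM centos
MAINTAINER manjeetyadav
ENV myname manjeetyadav
WORKDIR /tmp
RUN echo "created at image build time" > /tmp/buildfile
COPY testfile1 /tmp
ADD test.tar.gz /tmp
EXPOSE 80
ENTRYPOINT ["echo"]
CMD ["container started"]
------------------------------------

Here ENTRYPOINT is fixed and CMD only supplies the default argument, so docker run <image> hello would print hello instead of "container started".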

Dockerfile
1. Create a file named Dockerfile
2. Add instructions in the Dockerfile
3. Build the Dockerfile to create an image
4. Run the image to create a container


vi Dockerfile-----------------------
FROM ubuntu
RUN echo "Technically giftgu" > /tmp/testfile
------------------------------------


To create an image out of the Dockerfile (run the command from the directory that contains it)
docker build -t myimg .
docker ps -a
docker images

Now create container from the above image
docker run -it --name mycontainer myimg /bin/bash
cat /tmp/testfile

................................................
vi Dockerfile
FROM centos
WORKDIR /tmp
RUN echo "Welcome to nonstop step" > /tmp/testfile
ENV myname manjeetyadav
COPY testfile1 /tmp
ADD test.tar.gz /tmp

.................................
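To build and test this Dockerfile (assuming testfile1 and test.tar.gz really exist next to the Dockerfile; the image and container names below are just examples):

docker build -t myimage2 .
docker run -it --name mycontainer2 myimage2 /bin/bash
ls /tmp            shows testfile, testfile1 and the extracted contents of test.tar.gz
echo $myname       prints manjeetyadav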

Volume
A volume is simply a directory inside our container.
Firstly, we have to declare a directory as a volume, and only then can we share the volume.
Even if we stop the container, we can still access the volume.
A volume is created in one container.
You can't add a volume to an already existing container.
You can share one volume across any number of containers.
A volume is not included when you update an image.
You can map a volume in two ways:
Container - container
Host - container


Benefits of volumes
Decoupling the container from storage
Sharing a volume among different containers
Attaching a volume to containers
Deleting a container does not delete its volume

..........................
Creating a volume from a Dockerfile
Create a Dockerfile and write:
FROM centos
VOLUME ["/myvolume1"]

Then create image from this dockerfile
docker build -t myimage .

Now create a container from the image & Run
docker run -it --name container1 myimage /bin/bash

Now do ls, you can see myvolume1

Now, share volume with another container
Container1 -- Container2

docker run -it --name container2 --privileged=true --volumes-from container1 centos /bin/bash

Now, after creating container2, /myvolume1 is visible in it. Whatever you do in this volume from one container can be seen from the other container.
touch /myvolume1/samplefile
docker start container1
docker attach container1
ls /myvolume1
you can see samplefile

.....................................
Now try to create volume by using cmd
docker run -it --name container3 -v /volume2 centos /bin/bash

do ls - cd /volume2

Now create one file cont3file and exit
Now create one more container, and share volume

docker run -it --name container4 --privileged=true  --volumes-from container3 centos /bin/bash

Now you are inside the container; do ls and you can see volume2

Now create one file inside this volume and then check in container3 - you can see that file


...........................................................
Volume (Host - container)
The Docker host's (EC2 machine's) files are in /home/ec2-user; mount that directory into the container:
docker run -it --name hostcount -v /home/ec2-user:/rajput --privileged=true centos /bin/bash

cd /rajput
do ls, now you can see all files of host machine

touch rajputfile     (in container)
exit

Now check in EC2 machine, you can see this file

.......................
Some other cmd for volume

docker volume ls
docker volume create <Volume name>
docker volume rm <volume name>
docker volume prune    {removes all unused docker volumes}
docker volume inspect <volume name>
docker container inspect <container name>

.................................
Expose (port expose)

docker run -td --name techserver -p 80:80 centos    (-d means detached mode: the container keeps running in the background and we do not go inside it)

Difference between the EXPOSE instruction and -p: with EXPOSE alone, the container can communicate with other containers, but the port cannot be reached from the host machine.
-p means the port is published, i.e. exposed to the host and the outside world as well.

docker ps
docker port techserver            shows all port mappings of the container

docker exec -it techserver /bin/bash    to go inside the container
yum  install httpd -y
cd /var/www/html
echo "Subscribe Tech Guftugu" > index.html
systemctl start httpd
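
To verify the published port from the Docker host, exit the container (or open a second terminal) and run the quick check below. If systemctl does not work inside the container, httpd can also be started directly with /usr/sbin/httpd:

curl http://localhost:80        should return "Subscribe Tech Guftugu"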


................................................
docker run -td --name myjenkins -p 8080:8080 jenkins

................................


Difference between docker attach and docker exec?
docker exec creates a new process in the container's environment, while docker attach just connects the standard input/output/error of the main process inside the container to the corresponding standard input/output/error of the current terminal.

docker exec is specifically for running new things in an already started container, be it a shell or some other process.
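
A quick side-by-side, assuming the techserver container from above is still running:

docker attach techserver                    attaches to the container's main process (PID 1)
docker exec -it techserver /bin/bash        starts a new bash process; exiting this shell leaves the container running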

What is the difference between exposing and publishing a port in Docker?
Basically you have three options:
1. Neither specify EXPOSE nor -p
2. Only specify EXPOSE
3. Specify EXPOSE and -p

1. If you specify neither EXPOSE nor -p, the service in the container will only be accessible from inside the container itself.
2. If you EXPOSE a port, the service in the container is not accessible from outside Docker, but it is accessible from inside other Docker containers, so this is good for inter-container communication.

3. If you EXPOSE and -p a port, the service in the container is accessible from anywhere, even outside Docker.
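
A hedged sketch of the three options, using nginx purely as an example image (container names are illustrative):

docker run -td --name web1 nginx                    neither EXPOSE nor -p
docker run -td --name web2 --expose 80 nginx        EXPOSE only (intended for inter-container use)
docker run -td --name web3 -p 8080:80 nginx         EXPOSE and -p: published to host port 8080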

................................................

Upload a container image to Docker Hub - pull/push

docker run -it --name container1 ubuntu /bin/bash
Now create some files inside the container
Now create an image of this container

docker commit container1 image1

Now create an account on hub.docker.com

Now on your instance
docker login

enter username/password

Now give tag to your image

docker tag image1 <dockerhub-id>/newimage    (newimage can be any name)
docker tag image1 manjeetyadav19/project1

docker push <dockerhub-id>/newimage
docker push manjeetyadav19/project1

Now you can see this image in docker hub account

Now create one instance in the Tokyo region and pull the image from Docker Hub

docker pull manjeetyadav19/project1
docker run -it --name mycontainer manjeetyadav19/project1 /bin/bash

...............................................
Some important commands

Stop all running containers: docker stop $(docker ps -q)

Delete all stopped containers: docker rm $(docker ps -a -q)

Delete all images: docker rmi -f $(docker images -q)



Step 1 — Creating an Independent Volume


docker volume create --name DataVolume1

docker run -ti --name=Container1 -v DataVolume1:/datavolume1 ubuntu

echo "Share this file between containers" > /datavolume1/Example.txt

check in shared volume

We can verify that the volume is present on our system with docker volume inspect:
docker volume inspect DataVolume1

Step 2 — Creating a Volume that Persists when the Container is Removed

docker run -ti --name=Container2 -v DataVolume2:/datavolume2 ubuntu

When we restart the container, the volume will mount automatically:
docker start -ai Container2

Docker won’t let us remove a volume if it’s referenced by a container. Let’s see what happens when we try:
docker volume rm DataVolume2

Removing the container won't affect the volume. Remove the container, then list the volumes with docker volume ls to see it's still present on the system:
docker rm Container2
docker volume ls

And we can use docker volume rm to remove it:
docker volume rm DataVolume2

Step 3 — Creating a Volume from an Existing Directory with Data

As an example, we’ll create a container and add the data volume at /var, a directory which contains data in the base image:
docker run -ti --rm -v DataVolume3:/var ubuntu


All the content from the base image’s /var directory is copied into the volume, and we can mount that volume in a new container.

This time, rather than relying on the base image’s default bash command, we’ll issue our own ls command, which will show the contents of the volume without entering the shell:
docker run --rm -v DataVolume3:/datavolume3 ubuntu ls datavolume3


Step 4 — Sharing Data Between Multiple Docker Containers

Create Container4 and DataVolume4

Use docker run to create a new container named Container4 with a data volume attached:
docker run -ti --name=Container4 -v DataVolume4:/datavolume4 ubuntu

Create Container5 and Mount Volumes from Container4
docker run -ti --name=Container5 --volumes-from Container4 ubuntu

Start Container 6 and Mount the Volume Read-Only
docker run -ti --name=Container6 --volumes-from Container4:ro ubuntu

Now that we’re done, let’s clean up our containers and volume:
docker rm Container4 Container5 Container6
docker volume rm DataVolume4

   

Overlay Networks

An overlay network uses software virtualization to create additional layers of network abstraction running on top of a physical network. In Docker, the overlay network driver is used for multi-host network communication. This driver utilizes Virtual Extensible LAN (VXLAN) technology, which provides portability between cloud, on-premises and virtual environments. VXLAN solves common portability limitations by extending layer 2 subnets across layer 3 network boundaries, so containers can run on foreign IP subnets.


docker network create -d overlay --subnet=192.168.10.0/24 my-overlay-net
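
The overlay driver needs swarm mode; a minimal sketch, assuming a single-node swarm and using --attachable so that a standalone container can join the network (names are illustrative):

docker swarm init
docker network create -d overlay --attachable --subnet=192.168.10.0/24 my-overlay-net
docker run -it --rm --network my-overlay-net centos /bin/bash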


Macvlan Networks

The macvlan driver is used to connect Docker containers directly to the host network interfaces through layer 2 segmentation. No use of port mapping or network address translation (NAT) is needed and containers can be assigned a public IP address which is accessible from the outside world. Latency in macvlan networks is low since packets are routed directly from Docker host network interface controller (NIC) to the containers.

Note that macvlan has to be configured per host, and has support for physical NICs, sub-interfaces, network bonded interfaces and even teamed interfaces. Traffic is explicitly filtered by the host kernel modules for isolation and security. To create a macvlan network named my-macvlan-net, you'll need to provide a --gateway parameter to specify the IP address of the gateway for the subnet, and a -o parameter to set driver-specific options. In this example, the parent interface is set to the eth0 interface on the host:

docker network create -d macvlan \
--subnet=192.168.40.0/24 \
--gateway=192.168.40.1 \
-o parent=eth0 my-macvlan-net
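
A container can then be attached to this macvlan network, optionally with a fixed IP from the subnet (the address below is just an example and assumes eth0 is the host's NIC):

docker run -it --rm --network my-macvlan-net --ip 192.168.40.10 centos /bin/bash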

 

Docker Machine 

Docker Machine addresses these two broad use cases:

I have an older desktop system and want to run Docker on Mac or Windows
I want to provision Docker hosts on remote systems
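
A minimal docker-machine sketch for the second case, assuming the VirtualBox driver is available (the machine name is just an example):

docker-machine create --driver virtualbox mymachine
docker-machine ls
eval $(docker-machine env mymachine)        point the local Docker client at the new host
docker ps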

 

What is a namespace and what is a cgroup in Linux?
