Saturday, September 4, 2021

Interview question - 2 latest

 Dynamic inventory


To build a dynamic inventory, you first have to understand what Ansible accepts as an inventory source.

Ansible expects JSON in the format below.

Ansible expects a dictionary of groups (each group having a list of hosts under a hosts key, and group variables under a vars key), plus a _meta dictionary that stores host variables for all hosts individually (inside a hostvars dictionary).

You can run ./customdynamicinventory.py --list
and it will print the inventory in standard JSON format.
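A minimal sketch of that output (group and host names are hypothetical):

{
    "web": {
        "hosts": ["web1.example.com", "web2.example.com"],
        "vars": {"http_port": 80}
    },
    "_meta": {
        "hostvars": {
            "web1.example.com": {"ansible_host": "10.0.0.11"},
            "web2.example.com": {"ansible_host": "10.0.0.12"}
        }
    }
}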

 
[Screenshot: equivalent static inventory file]

ansible all -i customdynamicinventory.py -m ping

This will try to ping all the hosts listed in the CSV source file.
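For context, a minimal sketch of what such a script might look like, assuming a hosts.csv with host,group columns (all names are hypothetical):

#!/usr/bin/env python3
# Sketch of a CSV-backed dynamic inventory: prints the JSON shape Ansible expects.
import csv
import json
import sys

def build_inventory(path="hosts.csv"):
    inventory = {"_meta": {"hostvars": {}}}
    with open(path) as f:
        for row in csv.DictReader(f):
            group = inventory.setdefault(row["group"], {"hosts": [], "vars": {}})
            group["hosts"].append(row["host"])
            inventory["_meta"]["hostvars"][row["host"]] = {}
    return inventory

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # --host <name>: per-host vars are already returned via _meta, so answer {}
        print(json.dumps({}))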


 

Best practices for Ansible roles

First, let's understand the complete directory structure of an Ansible role:
 
  • defaults: The default variables for the role are stored in this directory. These variables have the lowest priority.
  • files: All the static files used inside the role are stored here.
  • handlers: Handlers are defined here rather than in the tasks directory, and are automatically looked up from this location when notified.
  • meta: This directory contains metadata about your role, including the dependencies required to run it on any system; the role will not run until those dependencies are resolved.
  • tasks: This directory contains the main list of tasks to be executed by the role.
  • vars: This directory has higher precedence than the defaults directory; its variables can only be overridden by passing them on the command line, in a specific task, or in a block.
  • templates: This directory contains the Jinja2 templates. All the variablized templates that are rendered into static files at runtime are stored here.

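You can scaffold this layout with the standard ansible-galaxy command (the role name is hypothetical):

ansible-galaxy init my_role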

 

Whitespace and Comments

Use whitespace and comments generously, so that someone using your role in the future can easily understand it.

 YAML format

Bad indentation is a common cause of invalid-syntax errors when running a role, and consistent indentation also makes your role easier to read.

Always Name Tasks
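For example, a named task (a minimal sketch; the package is just an illustration):

- name: Install the nginx package
  ansible.builtin.package:
    name: nginx
    state: present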
 
Version Control
Keep your roles and inventory files in git and commit when you make changes to them.
 
Variable and Vaults
 
A good approach is to start with a group_vars subdirectory containing two further subdirectories named "vars" and "vault".
 
Inside the "vars" directory, define all the variables, including the sensitive ones.
 
Then copy the sensitive variables into the "vault" directory, prefixing their names with "vault_".
 
Now adjust the variables in "vars" to point to the matching "vault_" variables using Jinja2 syntax, and ensure that the vault file is vault-encrypted.
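A minimal sketch of that layering (paths and variable names are hypothetical):

# group_vars/all/vars.yml
db_password: "{{ vault_db_password }}"

# group_vars/all/vault.yml  (encrypt it with: ansible-vault encrypt group_vars/all/vault.yml)
vault_db_password: s3cret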
 
Roles for multiple OS
 
Roles should be written so that they can run on multiple operating systems. Try to make your roles as generic as you can. But if you have created a role for a specific operating system or a specific application, then state that explicitly in the role name.
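One common pattern is to load OS-specific variables from facts (a sketch; the per-OS vars files are assumed to exist):

# tasks/main.yml
- name: Include OS-specific variables
  ansible.builtin.include_vars: "{{ ansible_facts['os_family'] }}.yml"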
 
Single role Single goal
 
Avoid tasks within a role that are not related to each other. Don't build a catch-all "common" role; it hurts the readability of your role.
 
Other Tips:
 
  • Use a module if available
  • Try not to use command or shell module
  • Use the state parameter
  • Prefer scalar variables
  • Set default for every variable
  • If you have multiple roles related to each other, then try to create a common variable file for all of them, which can be included in your playbook
  • Use “copy” or “template” module instead of “lineinfile” module
  • Make role fully variablized
  • Be explicit when writing tasks. For example, when creating a file or directory, don't stop at src and dest; also define owner, group, mode, etc. (see the sketch below)
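A sketch illustrating the last two tips (all values are hypothetical):

- name: Deploy application config
  ansible.builtin.template:
    src: app.conf.j2
    dest: /etc/app/app.conf
    owner: app
    group: app
    mode: "0640"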
Summary:
 
  • Create roles that can be reused later.
  • Build them with the proper modules for better understanding.
  • Comment them properly so that someone else can understand them too.
  • Use proper indentation in the YAML.
  • Create your role variables and secure the sensitive ones with a vault.
  • Create a single role for a single goal.
defaults vs vars in an Ansible role
 

As we know, Ansible roles have a wide directory structure that looks something like this:

$ tree -d
.
├── defaults
├── files
├── handlers
├── media
├── meta
├── molecule
│   └── default
│       └── tests
├── tasks
└── templates

10 directories
  • defaults mean “default variables for the roles” and vars mean “other variables for the role”.
  • The priority of the vars is higher than that of defaults.
    For example, consider a variable named ‘version’ defined in the defaults have value ‘5.0.1’ and the same variable defined in vars have value ‘7.1.3’, so the final value that will be taken into account is the value defined in vars i.e., ‘7.1.3’.

Due to my limited understanding of this, I used to define all variables in defaults and whenever needed to override them, I declared those variables in vars.

Think of variables in terms of content: "static" variables have a constant value, while "dynamic" variables have a changing value. By this convention, static variables should be placed in defaults and dynamic ones in vars.

 

For Example, 

The download URL for Tomcat is "https://archive.apache.org/dist/tomcat/tomcat-version/"

“https://archive.apache.org/dist/tomcat/tomcat-” -> Fixed Part

“version” -> Varying Part

Here, we can make a variable for the fixed part in defaults, with any name, say "tomcat_base_url", and the varying part should be declared in vars, say "tomcat_version" (note that Jinja2 variable names use underscores, not hyphens).

So, whenever I have to use the Tomcat download URL in the role, it will be: "{{ tomcat_base_url }}{{ tomcat_version }}/".

defaults: contains variables that the user does not have to alter.

vars: contains variables that require input from the user.
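A minimal sketch of the two files (the version value is hypothetical):

# defaults/main.yml -- fixed part, users rarely touch it
tomcat_base_url: "https://archive.apache.org/dist/tomcat/tomcat-"

# vars/main.yml -- varying part, supplied per use
tomcat_version: "9"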

 

Docker image security
https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html 

Start with Trusted Images

Content Trust

Content Trust means you can verify the integrity and publisher of all data you receive across a channel. Unfortunately, Docker does not enable Content Trust by default. When running docker build, create, pull, push, or run, add the flag --disable-content-trust=false to your command to enable Content Trust.
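Equivalently, you can enable it for a whole shell session with the standard environment variable (the image is just an illustration):

export DOCKER_CONTENT_TRUST=1
docker pull alpine:3.14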

The Kernel

Docker creates a set of namespaces and control groups when you start a container with docker run.

Namespaces isolate processes within a container; these processes cannot see (nor affect) processes running in another container or the host system.
Control Groups implement resource accounting and limiting, providing metrics and ensuring each container gets its fair share of resources (RAM, CPU, disk I/O, etc.). This protects against a single container spiraling out of control, exhausting one or more resources, and taking down the entire system. Control Groups really shine on multi-tenant platforms like public and private PaaS, guaranteeing consistent uptime and performance even when some applications misbehave.
 
User namespaces: Docker does not enable these by default. User namespaces allow you to map a container's root user to a non-root user on the host. Start the daemon with --userns-remap to enable this. You can pass any of the following formats:
  1. uid
  2. uid:gid
  3. username
  4. username:groupname
  5. default (automatically maps to dockremap; if this user and group do not exist, Docker creates them)

Default user management:

dockerd --userns-remap=default
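The same setting can be made persistent in /etc/docker/daemon.json:

{
  "userns-remap": "default"
}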
Attacking the Docker Daemon

Rule #11 - Lint the Dockerfile at build time
Ensure a USER directive is specified
Ensure the base image version is pinned
Ensure the OS packages versions are pinned
Avoid the use of ADD in favor of COPY
Avoid curl bashing in RUN directives
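A small Dockerfile sketch that passes these checks (the image tag and package version are hypothetical); a linter such as hadolint can enforce the rules at build time:

FROM alpine:3.14
RUN apk add --no-cache nginx=1.20.1-r3
COPY app.conf /etc/nginx/app.conf
USER nginx
CMD ["nginx", "-g", "daemon off;"]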

RULE #0 - Keep Host and Docker up to date
RULE #1 - Do not expose the Docker daemon socket (even to the containers)
RULE #2 - Set a user
RULE #3 - Limit capabilities (Grant only specific capabilities, needed by a container)
RULE #4 - Add the --no-new-privileges flag
RULE #5 - Disable inter-container communication (--icc=false)
RULE #7 - Limit resources (memory, CPU, file descriptors, processes, restarts)
RULE #8 - Set filesystem and volumes to read-only
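Several of these rules map directly to docker run flags; a combined sketch (the image name is hypothetical):

docker run -d \
  --cap-drop=ALL --cap-add=NET_BIND_SERVICE \
  --security-opt=no-new-privileges \
  --memory=512m --cpus=1 --pids-limit=100 \
  --read-only --tmpfs /tmp \
  my_app_image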
 

How to enable communication between two networks

Bridge:- The bridge network is the private, internal default network that Docker builds on the host. Each container gets an internal IP address and can reach other containers on the same bridge using that internal IP. Bridge networks are typically used when applications running in containers need to communicate on a standalone host.

Docker creates a virtual network between the host and the containers and performs the necessary network address translation (NAT).

Create a bridge network:

docker network create --driver bridge my_isolated_bridge_network
docker network inspect my_isolated_bridge_network
docker network ls
NETWORK ID          NAME                         DRIVER
fa1ff6106123        bridge                       bridge
803369ddc1ae        host                         host
3b7e1ad19ee8        my_isolated_bridge_network   bridge
01cc882aa43b        none                         null 
docker run --net=my_isolated_bridge_network --name=my_psql_db postgres 


Host:- This driver removes the network isolation between the Docker host and the Docker containers, using the host's networking directly. With this, you cannot run multiple containers on the same host on the same port, since all containers on the host network share the host's ports.

Containers running with the host network share the host's networking namespace. Network performance is better with this driver because it does not require NAT.
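For example (the image is just an illustration):

docker run --net=host -d nginx

Here nginx listens on the host's port 80 directly; no -p port mapping is needed (or honored).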


 

Overlay:- The overlay driver enables communication between Docker containers running on different hosts, whose networks are otherwise hidden from each other. Those networks may be public or private. If there are two hosts each running Docker, the overlay network creates a subnet spanning the two hosts, and every container connected to this overlay network can communicate with the others.
 

 
 
To create an overlay network this way, we need a key/value store where the container information of all host systems is stored for communication, for example ZooKeeper, etcd, or Consul.
Also, because the overlay network sits on top of the host networks, some firewall-level configuration is required.
If you want to set up a secure overlay network, you can enable IPSec encryption.

Open the following ports between each of your hosts:

Protocol   Port   Purpose
udp        4789   data (VXLAN)
tcp/udp    7946   control

Check your key-value store service documentation; your service may need more ports open.

Create an overlay network by configuring options on each Docker daemon you wish to use with the network. You may set the following options:

 
Option                                       Description
--cluster-store=PROVIDER://URL               Location of the key-value store service
--cluster-advertise=HOST_IP                  IP address or interface of the clustering host
  (or --cluster-advertise=HOST_IFACE:PORT)
--cluster-store-opt=KEY-VALUE                Additional options, such as a TLS certificate
 
docker network create --driver overlay my_multi_host_network
docker run -itd --net=my_multi_host_network my_python_app
 
MACvlan scenario: Say you have built Docker applications (legacy in nature, like network traffic monitoring or system management) that are expected to be directly connected to the underlying physical network. In this situation, you can use the macvlan network driver to assign a MAC address to each container's virtual network interface, making it appear to be a physical network interface directly connected to the physical network.
 
docker network create -d macvlan --subnet=100.98.26.0/24 --gateway=100.98.26.1 -o parent=eth0 pub_net
Verify the macvlan network:
 
docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
871f1f745cc4        bridge              bridge              local
113bf063604d        host                host                local
2c510f91a22d        none                null                local
bed75b16aab8        pub_net             macvlan             local
docker  run --net=pub_net --ip=100.98.26.47 -itd alpine /bin/sh
 
RUN vs CMD & ENTRYPOINT

In a nutshell

  • RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages.
  • CMD sets default command and/or parameters, which can be overwritten from command line when docker container runs.
  • ENTRYPOINT configures a container that will run as an executable.

The difference is that ENTRYPOINT's command and parameters are not ignored when the container runs with command-line parameters.

 
Shell and Exec forms
(RUN, CMD and ENTRYPOINT)

Shell form

<instruction> <command>

Examples:

RUN apt-get install python3
CMD echo "Hello world"
ENTRYPOINT echo "Hello world"

Exec form
<instruction> ["executable", "param1", "param2", ...]

Examples:

RUN ["apt-get", "install", "python3"]
CMD ["/bin/echo", "Hello world"]
ENTRYPOINT ["/bin/echo", "Hello world"]

When an instruction is executed in exec form, it calls the executable directly and shell processing does not happen. For example, the following snippet in a Dockerfile

ENV name John Dow
ENTRYPOINT ["/bin/echo", "Hello, $name"]

when the container runs as docker run -it <image>, will produce the output

Hello, $name

i.e., the variable $name is not substituted.

How to run bash?
If you need to run bash (or any other interpreter, but not sh), use the exec form with /bin/bash as the executable. In this case normal shell processing will take place. For example, the following snippet in a Dockerfile

ENV name John Dow
ENTRYPOINT ["/bin/bash", "-c", "echo Hello, $name"]

will produce the output

Hello, John Dow


RUN

The RUN instruction allows you to install your application and the packages required for it. It executes any command on top of the current image and creates a new layer by committing the results. You will often find multiple RUN instructions in a Dockerfile.

RUN has two forms:


RUN <command> (shell form)
RUN ["executable", "param1", "param2"] (exec form)

CMD

The CMD instruction allows you to set a default command, which will be executed only when you run the container without specifying a command. If the Docker container runs with a command, the default command will be ignored. If a Dockerfile has more than one CMD instruction, all but the last CMD instruction are ignored.

CMD has three forms:

CMD ["executable","param1","param2"] (exec form, preferred)
CMD ["param1","param2"] (sets additional default parameters for ENTRYPOINT in exec form)
CMD command param1 param2 (shell form)

Again, the first and third forms were explained in Shell and Exec forms section.
The second one is used together with ENTRYPOINT instruction in exec form.
It sets default parameters that will be added after ENTRYPOINT parameters if container runs without command line arguments.


CMD echo "Hello world"

when the container runs as docker run -it <image>, will produce the output

Hello world

but when the container runs with a command, e.g., docker run -it <image> /bin/bash, CMD is ignored and the bash interpreter runs instead.

ENTRYPOINT

The ENTRYPOINT instruction allows you to configure a container that will run as an executable.
It looks similar to CMD because it also allows you to specify a command with parameters.
The difference is that ENTRYPOINT's command and parameters are not ignored when the Docker container runs with command-line parameters.
(There is a way to override ENTRYPOINT with docker run --entrypoint, but you will rarely need it.)

ENTRYPOINT has two forms:

ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)
ENTRYPOINT command param1 param2 (shell form)

Exec form

Exec form of ENTRYPOINT allows you to set commands and parameters and then use either form of CMD to set additional parameters that are more likely to be changed.
ENTRYPOINT arguments are always used, while CMD ones can be overwritten by command line arguments provided when Docker container runs.
For example,

ENTRYPOINT ["/bin/echo", "Hello"]
CMD ["world"]

when the container runs as docker run -it <image>, the output is

Hello world

but when the container runs as docker run -it <image> John, the output is
Hello John

Shell form
Shell form of ENTRYPOINT ignores any CMD or docker run command line arguments.

The bottom line
Use RUN instructions to build your image by adding layers on top of the initial image.

Prefer ENTRYPOINT to CMD when building an executable Docker image where a command must always be executed.
Additionally, use CMD alongside it to supply extra default arguments that can be overwritten from the command line when the container runs.

Choose CMD alone if you just need a default command and/or arguments that can be overwritten from the command line when the container runs.
 
