Storage
Storage types:
EBS (Elastic Block Store) - block storage
EFS (Elastic File System) - file storage
Ephemeral storage - instance store - temporary storage
An EBS volume can be attached to only one EC2 instance at a time.
Best practice for moving a volume to another EC2 instance: first unmount it from the 1st EC2 instance, detach it, and then attach it to the 2nd one.
EBS volumes are tied to an availability zone: if an EC2 instance is running in AZ 1a and we created the volume in AZ 1c, we can't attach that volume to the instance in 1a.
Sol. Create a snapshot of the volume, then create a new volume from that snapshot in the correct AZ.
Increase root volume size without any downtime.
Sol. Increase the size of the EBS volume, then grow the partition and filesystem.
First, you should know which filesystem is in use on the system.
Suppose Linux with an XFS filesystem; run the commands below:
growpart /dev/xvda 1
xfs_growfs -d /
df -h
If the volume was formatted with an ext filesystem (mkfs.ext4), grow it with resize2fs instead:
resize2fs /dev/sdb1
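A quick way to confirm the filesystem type before choosing between xfs_growfs and resize2fs (a minimal sketch; the device layout is just an example):
df -Th /     # shows the filesystem type of the root volume (e.g. xfs or ext4)
lsblk -f     # lists all block devices with their filesystems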
gp2
You can't manually increase/decrease IOPS; performance scales with volume size.
IOPS: 3 IOPS per GB, with a minimum baseline of 100 IOPS (volumes under ~33 GB still get 100).
Example: 300/3000 - a 100 GB volume has a 300 IOPS baseline and can burst up to 3000 IOPS; burst credits accumulate over time and get used in peak (emergency) periods.
io1
You provision the IOPS count manually.
IOPS: up to 50 IOPS per provisioned GB.
SSD performance is measured mainly in IOPS.
HDD performance is measured mainly on the basis of throughput.
st1 (throughput-optimized HDD)
Baseline: 40 MB/s per TB.
sc1 (cold HDD)
Baseline: 12 MB/s per TB.
Magnetic (standard)
No provisioned IOPS.
No throughput guarantee.
Use case: you need the minimum cost for EBS.
Requirement: 1000 GB,
IOPS: 3000.
A 1000 GB gp2 volume already gives a 3000 IOPS baseline (3 IOPS/GB), and the charge for 1000 GB of gp2 is lower compared to io1.
Use case: you need to create storage with 32,000 IOPS.
A single gp2 volume is capped (about 16,000 IOPS), so stripe multiple large gp2 volumes together in a RAID 0 (e.g. two volumes of roughly 5,334 GB each, where gp2 hits its per-volume cap) so their IOPS add up - see the sketch below.
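A minimal sketch of striping two attached EBS volumes into a RAID 0 array on Linux (the device names /dev/xvdf and /dev/xvdg are assumptions for this example):
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg   # stripe the two volumes
sudo mkfs -t xfs /dev/md0                                                     # create a filesystem on the array
sudo mkdir -p /data && sudo mount /dev/md0 /data                              # mount it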
How do you decrease the size of an EBS volume (a secondary volume) if data already exists on it?
You can't shrink a volume in place. Create another volume with the lower size, attach it to the EC2 machine,
format it, rsync the data to the lower-size volume, and then delete the higher-size volume. (See the sketch below.)
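A minimal sketch of that copy, assuming the old data is mounted at /data and the new, smaller volume shows up as /dev/xvdg (both names are examples):
sudo mkfs -t xfs /dev/xvdg                       # format the new, smaller volume
sudo mkdir -p /mnt/new && sudo mount /dev/xvdg /mnt/new
sudo rsync -avx /data/ /mnt/new/                 # copy everything, preserving permissions, without crossing filesystems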
EFS is essentially a managed NFS file system.
Create an EFS volume, define a security group allowing NFS from an IP or range of IPs, and choose the VPC/AZs in which you want to mount it.
EFS can also be mounted from other VPCs, on-premises systems, etc.
For another VPC, create VPC peering first, and then follow the Amazon instructions.
For on-premises (local system) mounts, first connect via AWS Direct Connect or an AWS VPN connection. A mount sketch follows below.
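A minimal sketch of mounting an EFS file system over NFS from a Linux machine (the file-system DNS name and mount point are placeholders):
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
df -h /mnt/efs   # verify the mount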
Of the three storage options - EBS, EFS, and ephemeral (instance store) storage - ephemeral is the fastest.
On stop/start the ephemeral data is lost; on reboot the data is not lost.
Ephemeral use cases:
1. When a high-storage application needs testing.
2. Game companies that sometimes need high-storage machines can use it.
AWS EC2 Instance Types
Use case:
If an interviewer asks for an 8 GB or a 32 GB instance type, know some examples (e.g. t2.large/m5.large have 8 GiB RAM; m5.2xlarge/r5.xlarge have 32 GiB).
AWS EC2 instance pricing models
On-Demand
Partial hours are billed as full hours (for some platforms; Linux is now billed per second).
No capacity guarantee; production servers mainly run on Reserved Instances.
On-Demand drawbacks:
It is costly compared to Reserved.
No capacity guarantee - a risk when you are running a critical application.
Reserved Instance
Cheaper compared to On-Demand. Illustrative relative prices:
no upfront - 60
partial upfront - 50
full upfront - 40
Drawbacks of Reserved Instances:
You pay the charge for the whole term,
whether you use the instance or not.
Term contracts for Reserved Instances:
- 1 year
- 3 years
Scheduled Reserved Instance
Can be scheduled for recurring time windows,
like 4 hours daily, 24 hours weekly, etc.
Spot Instance
From the three pools above -
On-Demand,
Reserved Instances,
Scheduled Instances -
whatever capacity customers are not using, AWS offers on the spot market, and it is sold to customers at the current spot price for some time.
With Spot Blocks, a duration of up to 6 hours can be reserved.
Use cases:
We can use it in testing environments.
We can use this instance type in auto scaling.
Dedicated Instance
Here we are not using instances in shared mode; we have hardware dedicated to our EC2 instances only.
Rates are the same as for
On-Demand,
Reserved Instances,
Scheduled Instances,
but there is an extra charge for the dedicated hardware (a per-region dedicated fee).
Note: you have to pay that extra fee again in each region where you run dedicated instances.
Dedicated Host
When we don't want our instances to move between physical machines, we use a dedicated host.
Use case:
When we have license activation that is tied to the hardware, e.g. to a MAC address,
we use a Dedicated Host instance.
Launch Template
We create a template for EC2 instances, including
instance type, image (AMI), EBS volume, key pair, and security group.
Use case:
We don't need to select all the configuration again for each launch.
If we are using automation through Ansible or Lambda, we can still use the launch template. (See the CLI sketch below.)
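A minimal sketch of creating a launch template from the AWS CLI (the template name, AMI ID, key name, and security group ID are placeholders):
aws ec2 create-launch-template \
    --launch-template-name web-template \
    --launch-template-data '{"ImageId":"ami-0123456789abcdef0","InstanceType":"t2.micro","KeyName":"my-key","SecurityGroupIds":["sg-0123456789abcdef0"]}'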
Capacity Reservation
Options recap:
On-Demand
Scheduled Reserved Instance
Spot Instance
Dedicated Instance
Dedicated Host
Launch Template
Reserved Instance - we reserve both the EC2 billing discount and the capacity.
Capacity Reservation - we reserve capacity only on the hypervisor. So if we launch an On-Demand instance into reserved capacity, we don't need to wait in a queue for the launch; and unlike a Reserved Instance there is no 1- or 3-year commitment (you pay the On-Demand rate only while the reservation exists).
IAM
Securing the root account:
There are two access paths - the web console and
the AWS CLI (a maximum of 2 access keys can be generated).
Web console:
Password - set a strong password, or delete the password.
MFA - use MFA for the web UI.
Access keys (access key ID / secret key) - used for the AWS CLI; delete the root access keys and create an IAM user for access instead. A maximum of 2 access keys can be generated for the root user.
Create an IAM user:
Programmatic access - enables access keys, used by the AWS API, CLI, SDKs, and other development tools.
Console access - enables a password, allowing the user into the console.
Set permissions:
Add users to a group,
copy permissions from an existing user, or
attach existing policies directly.
How do you set a resource-based policy, e.g. give a user start/stop access to particular instances out of 10 EC2 instances?
How do you create a policy?
Visual editor
JSON
In JSON -
you can adapt an example found via Google, or
use the AWS Policy Generator - https://awspolicygen.s3.amazonaws.com/policygen.html (a sample policy for the start/stop question above is sketched below).
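A minimal sketch of an identity policy answering the start/stop question above; the account ID, region, and instance IDs are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances"],
    "Resource": [
      "arn:aws:ec2:us-east-1:111122223333:instance/i-0abc1234def567890",
      "arn:aws:ec2:us-east-1:111122223333:instance/i-0fed9876cba543210"
    ]
  }]
}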
Group Policy & Permissions
When you need to provide the same type of permissions to multiple users, use a group.
Create multiple groups with different permissions, and assign users to the groups.
IAM Role:
1. Use case: when 2 different services need to communicate, we can use an IAM role.
2. Use case: when some users need to access another account's services, but those users are not actually users of the other account, we can provide cross-account access via a role.
Identity Providers
We can use IAM with external credentials, e.g. using Gmail credentials to log in to AWS (federation).
Account Settings:
Here we have things like the password policy for IAM users.
Credential Report
We can download a report for audit purposes, e.g. how many users there are and which policies are applied to those IDs.
AWS CLI configuration
On Windows
On Linux - a sketch follows below.
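A minimal sketch of configuring the CLI after installation (the key values shown are obviously placeholders):
aws configure
# AWS Access Key ID [None]: AKIAEXAMPLEKEYID
# AWS Secret Access Key [None]: wJalrEXAMPLESECRETKEY
# Default region name [None]: us-east-1
# Default output format [None]: json
aws sts get-caller-identity   # verify the credentials work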
S3 - Simple Storage Service
We use it for backup, archiving, and log data backup; it is low-cost storage.
S3
Data in a bucket is secure and designed for 99.999999999% (11 nines) durability.
Storage size is unlimited.
Use case:
We have a prod server; after some time we need to move data off the main server as a backup.
S3 can be used for versioning as well - we sometimes keep versioning for data protection, etc.
S3 can be used for hosting static websites; we can't host dynamic websites on S3, since a dynamic website needs a database etc.
S3 can be used for redirection as well. (See the CLI sketch below.)
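A minimal sketch of enabling static website hosting on a bucket from the CLI (the bucket name and region are placeholders):
aws s3 website s3://my-static-site-bucket --index-document index.html --error-document error.html
# The site is then served at http://my-static-site-bucket.s3-website-us-east-1.amazonaws.com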
S3 Transfer Acceleration
Use case: when we need to upload a file to S3, or download a file, faster, we can use Transfer Acceleration. AWS charges an extra cost for this, but we can enable it only when we need it.
S3 Requester Pays
Use case: Amazon charges for downloading data from S3, not for uploading data to S3. Suppose we have 100 GB of data in our S3 bucket and we give another AWS account the right to download it. If we do not enable this option, the charges are paid by us; if we enable it, the requester's AWS account pays the charges for downloading from our S3 bucket.
S3-supporting file system:
s3fs
How do you mount an S3 bucket on your server?
Step 1: update all packages.
sudo yum update -y
Step 2: install the dependencies.
-> On CentOS or Red Hat:
sudo yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel
Step 3: clone the s3fs source code from git.
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
Step 4: change to the source code directory, then compile and install the code with the following commands:
cd s3fs-fuse
./autogen.sh
./configure --prefix=/usr --with-openssl
make
sudo make install
Step 5: use the command below to check where the s3fs binary was placed in the OS; it also confirms the installation is OK.
which s3fs
Step 6: you need an access key ID and secret key.
Step 7: create a new file in /etc with the name passwd-s3fs and paste the access key and secret key in the format below.
touch /etc/passwd-s3fs
vim /etc/passwd-s3fs
Your_accesskey:Your_secretkey
Step 8: change the permissions of the file.
sudo chmod 640 /etc/passwd-s3fs
Step 9: now create a directory (or provide the path of an existing directory) and mount the S3 bucket in it.
s3fs awscloudindia2019 /mys3bucket/ -o passwd_file=/etc/passwd-s3fs
df -h
S3 classes
- S3 Standard
- Intelligent-Tiering
- S3 Standard-IA
- One Zone-IA
- Glacier
- Glacier Deep Archive
S3 Standard
When data (an object) is uploaded for the first time, it goes to Standard by default, and S3 replicates copies across 3 different zones.
Frequent data access is possible.
S3 Standard-IA
It also replicates copies across 3 different zones.
Data is still available immediately, but the class is priced for infrequent access (a retrieval fee applies).
One Zone-IA
It keeps only one copy of the data, in a single zone.
Access is immediate but priced for infrequent use.
Use it for secondary backups - e.g. we keep the primary copy in Standard, and take a secondary copy in One Zone-IA.
Glacier
It replicates copies across 3 different zones.
Frequent data access is not possible; data is restored on request, with retrieval options at different prices:
- Expedited - minutes (subject to retrieval capacity being available)
- Standard - 3-5 hours
- Bulk - 5-12 hours
Intelligent-Tiering
It is smart storage: if data is accessed frequently it is charged like Standard; if access is not frequent, the object is moved to a tier priced like S3 Standard-IA.
S3 object/bucket lifecycle rules
Management - Add lifecycle rule - add a filter to limit the scope (an object prefix, or leave blank for the full bucket).
Storage class transition (current version or previous versions) - days after creation - e.g. after 30 days the object moves to the next storage class, like Standard-IA.
+Add transition
Here we can define where data should move after each period - we can define the whole lifecycle.
Expiration
Defines when data should be deleted from S3 storage. (A CLI sketch follows below.)
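A minimal sketch of the same lifecycle expressed through the CLI (the bucket name, prefix, and day counts are examples):
aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket --lifecycle-configuration '{
  "Rules": [{
    "ID": "archive-then-expire",
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},
    "Transitions": [
      {"Days": 30, "StorageClass": "STANDARD_IA"},
      {"Days": 90, "StorageClass": "GLACIER"}
    ],
    "Expiration": {"Days": 365}
  }]
}'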
S3 Cross-Region Replication
DR (disaster recovery) solution.
If you have an S3 bucket, you can build a DR solution with cross-region replication of the S3 bucket - meaning you replicate your bucket into another region.
How to implement S3 cross-region replication of a bucket:
Create the source and destination buckets, and enable versioning on both buckets.
Select the source bucket - Management - Replication - set the source - set the destination (same or another account) - create the role.
How do you provide S3 resource access through cross-account access?
Create a user - provide programmatic access etc. - attach a policy - S3FullAccess.
Go to the resource account - create a bucket - upload a few objects.
Now go to Bucket policy - policy editor - create the JSON - define the Principal (the user ARN from the accessing account) and the Resource (the bucket ARN from the resource account) - done. (A sketch follows below.)
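A minimal sketch of such a cross-account bucket policy; both ARNs are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:user/dev-user"},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::resource-account-bucket",
      "arn:aws:s3:::resource-account-bucket/*"
    ]
  }]
}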
VPC
Virtual Private Cloud
It means we are using AWS resources virtually, and we have created our own cloud for private use.
VPC lab
Create 1 VPC; in the VPC,
create 2 networks (subnets):
Public
Private
The public subnet connects to the internet using an IGW (internet gateway).
The private subnet is accessible internally only.
The private subnet connects to the internet using a NAT gateway:
the NAT gateway is created in the public network, and the private network's route table points to the NAT gateway.
Check internet connectivity from the private subnet: it is able to ping Google DNS, but it cannot be accessed from outside.
VPC Peering Lab
Create 2 VPCs in 2 different accounts, AWS Dev and AWS Test:
AWS Dev - VPC A, region Virginia - 10.100.0.0/16, subnet 10.100.0.0/24, private IP 10.100.0.179
AWS Test - VPC B, region Singapore - 10.200.0.0/16, subnet 10.200.0.0/24, private IP 10.200.0.81
Create an IGW for both VPCs.
Now set up VPC peering, and add route table entries on both sides for the peer subnets.
Site-to-Site VPN connection
Connects an AWS site to an on-premises site.
- We need 2 gateways: one on the AWS side, called the Virtual Private Gateway;
- another gateway on the premises side, called the Customer Gateway.
- We then create a tunnel between AWS and on-premises using the VPN connection.
For this lab we create 2 VPCs, both on AWS; we treat one as the on-premises site and the other as the AWS site.
- Create 2 VPCs in different regions, and launch 1 instance in each VPC.
- Then create the Virtual Private Gateway for the AWS side.
- Define the Customer Gateway with the customer's public IP.
- Then create the Site-to-Site VPN connection (creating the tunnel). In the connection you also define which on-premises network it should reach, so provide the on-premises private network (e.g. 10.200.0.0/16).
Tunnel options:
Tunnel 1 / Tunnel 2 - only 1 tunnel works at a time; you could call it active/backup. (AWS creates these itself.) - Done.
Now go to the AWS site side:
do the routing in the route table there. Alternatively, there is an option called Route Propagation; if you select it, the route table entries are added automatically when the tunnel comes up.
Come back to the Site-to-Site connection; there is a Download Configuration option. Click it and send the information to the on-prem team.
In Download Configuration, select the config matching your on-prem gateway.
In our case we are using AWS as the on-prem side, so we download the Generic config.
Now go to the on-prem system.
Install the openswan package.
vim /etc/ipsec.conf
Uncomment: include /etc/ipsec.d/*.conf
Enter a few parameters in the file below:
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
vim /etc/ipsec.d/aws-vpn.conf
conn Tunnel1
authby=secret
auto=start
left=%defaultroute
leftid=3.89.62.46
right=13.127.4.210
type=tunnel
ikelifetime=8h
keylife=1h
phase2alg=aes128-sha1;modp1024
keyingtries=%forever
keyexchange=ike
leftsubnet=10.200.0.0/16
rightsubnet=10.100.0.0/16
dpddelay=10
dpdtimeout=30
dpdaction=restart_by_peer
leftid=3.89.62.46 - put the Customer Gateway IP
right=13.127.4.210 - put the Virtual Private Gateway (tunnel) IP
leftsubnet=10.200.0.0/16 - on-prem side subnet
rightsubnet=10.100.0.0/16 - AWS side subnet
vim /etc/ipsec.d/aws-vpn-secrets
3.89.62.46(customer gateway) 13.127.4.210(virtual private gateway): PSK "pre-shared key from the config file"
chkconfig ipsec on
service ipsec start
service ipsec status
Now check the Site-to-Site VPN connection: the tunnel should be up, and in the AWS-side route table the entry should be visible automatically.
Client VPN Endpoint
It is like connecting to your company network from home through a VPN connection.
Lab:
Go to Client VPN Endpoints - it will ask for certificates.
Step 1: create the certificate files.
Create the certificates through a script, with the required changes such as domain name, location/address, etc.
Once the certificates are created, you will get files like the ones below:
CA.crt
CA.key
CA.srl
openssl.cnf
sanjaydahiya.com.crt
sanjaydahiya.com.csr
sanjaydahiya.com.key
cat sanjaydahiya.com.crt
copy the contents
Step 2: create the certificate in Certificate Manager.
Then go to Certificate Manager - Import a certificate (or create a certificate).
Copy in:
Certificate body - sanjaydahiya.com.crt ,
Certificate private key - sanjaydahiya.com.key,
Certificate chain - CA.crt
Step 3: configure the Client VPN endpoint.
Go again to Client VPN Endpoints -
create a Client VPN endpoint:
Name Tag:
Client IPv4 CIDR: 10.125.0.0/22
Authentication Info.
Server certificate ARN
Client certificate ARN
Connection Logging: NO
Other optional parameters:
DNS server 1 IP: 172.31.0.2
DNS server 2 IP: 8.8.8.8
VPC/Subnet /
security group /
Add an authorization rule - 0.0.0.0/0
Create a route - route destination - 0.0.0.0/0
Step 4: download the VPN config file, import it on your system, and connect.
Go to the Client VPN endpoint - Download client configuration.
Add the <cert> and <key> entries to the client config file.
Import the config file into the OpenVPN client on your system and connect.
Now check whether you can ping an AWS private IP in the same subnet; you should also see that you got a private IP from the client VPN CIDR.
Client VPN
Lab 2
Step 1:
Create 1 VPC with 2 subnets:
1 private
1 public
Launch 1 EC2 instance in the private subnet.
Launch an OpenVPN Access Server instance in the public subnet.
Step 2:
Connect to the OpenVPN server over SSH,
log in as openvpnas, and accept the setup prompts (yes, Enter, ...).
Step 3:
Set a password for the openvpn user, then open the admin URL:
URL - https://18.xxx
login: openvpn
password: xxxx
Step 4:
Download the OpenVPN Connect app:
for Windows
for Mac OS
for Linux
Step 5:
Connect OpenVPN on your system.
Now try to ping the other private instance's IP.
VPC Endpoint Lab
Use case:
We need to access an S3 bucket from our laptop via the AWS CLI:
log in with the AWS CLI in the same region,
aws s3 ls
aws s3 ls s3://sanjaydahiya2019
So here our laptop is able to access the bucket through the internet.
In the lab we need to access the S3 bucket from our private-subnet instance internally.
We achieve this by putting a VPC endpoint device inside the VPC.
Create 1 VPC and 2 subnets (public, private), assign 1 EC2 instance to each, and log in to the private EC2 via the public EC2. Now try to access the S3 bucket - you will not be able to.
Create the VPC endpoint (a CLI sketch follows below).
Now try again - you will be able to access the bucket.
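A minimal sketch of creating a gateway endpoint for S3 from the CLI (the VPC ID, route table ID, and region are placeholders):
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0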
Benefits:
1. Connect to the S3 bucket with low latency.
2. Data transfer stays within the AWS environment, not over the internet, so it is also secure.
3. It is also cost-effective: anything downloaded/uploaded over the internet is costly, but within AWS it is cheaper.
Q. If you have an on-prem network, you need to access the AWS network, and you don't have internet access in your on-prem network, how do you do it?
Sol. We connect AWS and on-prem with a VPN or Direct Connect.
Security Group:
It is a kind of firewall that sits just in front of the EC2 instance; it has 2 rule sets, inbound and outbound.
By default inbound allows nothing,
and outbound allows all traffic.
ACL
While a security group works at the EC2 level, just in front of the instance,
an ACL works at the subnet level; it is also a kind of firewall.
Security group - it is a stateful firewall - you only define rules for one side; return traffic is allowed automatically.
Subnet ACL - it is a stateless firewall - it checks rules on both sides (inbound & outbound).
Servers listen on well-known ports (1-1024),
while clients use ephemeral ports (roughly 1025-65535).
So when we open inbound port 22 in an ACL and then open only port 22 outbound, it won't work, because the return traffic goes to a random client port in the 1025-65535 range, so we need to open that port range outbound. (See the sketch below.)
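A minimal sketch of that NACL rule pair from the CLI (the ACL ID and rule numbers are placeholders):
# Allow SSH in on port 22
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --ingress --rule-number 100 --protocol tcp --port-range From=22,To=22 \
    --cidr-block 0.0.0.0/0 --rule-action allow
# Allow the return traffic out on the ephemeral port range
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --egress --rule-number 100 --protocol tcp --port-range From=1025,To=65535 \
    --cidr-block 0.0.0.0/0 --rule-action allow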
ENI - Elastic Network Interface
Suppose you have an EC2 instance; by default there is 1 Ethernet card (eth0), but you need to attach another Ethernet card to the EC2 machine.
How to add an ENI:
Go to Network Interfaces - Create Interface - choose the VPC/subnet - attach it to the EC2 machine.
Note: in production, we never run eth0 with the public internet connection: if eth0 goes down we can't recover that same primary interface, but an additional Ethernet card (ENI) can be detached and attached to any machine.
AWS IP addresses
Private IP address
Public IP address
Elastic IP address - in AWS terms, you can call it a static IP.
- By default an EC2 instance gets both a public and a private IP, and when the EC2 instance stops and starts, the public IP can change.
- But if you just reboot the EC2 instance, the IP will not change.
- So to keep a persistent public IP, we use an Elastic IP.
- AWS's rule: if the Elastic IP is in use they do not charge for it; if you leave the IP unused - e.g. you stopped or deleted the EC2 instance and did not release the Elastic IP - they charge for that period.
Elastic Load Balancer
Lab 1
- Create 4 EC2 instances.
- Name the EC2 machines image1, image2, software1, software2.
- Add a load balancer - there are 3 types: Application, Network, Classic.
- Select Application (HTTP & HTTPS).
- Create a target group and security group - assign all 4 EC2 instances to the target group.
- Complete the load balancer - test it - copy the DNS name - hit it in a browser; each time it will show a different EC2 index page.
For an AWS load balancer we get a DNS name (used as an alias/A record) - we can point our domain at it in our hosted zone in AWS Route 53, or in GoDaddy or wherever our domain is hosted.
Health check -
It is a parameter in target groups.
When any EC2 instance goes down it is marked unhealthy. Until the health-check failure threshold is reached, traffic may still be routed to that server; once it is declared unhealthy, the load balancer stops sending traffic there.
Connection draining - deregistration delay:
- Suppose we deregister an instance from the target group.
- By default it will keep serving existing (in-flight) traffic on that instance for 300 seconds, and will not serve new traffic.
AWS Certificate Manager
The concept of certificates:
The old way -
we create a certificate request on our system, send it to a CA for signing, and deploy the returned certificate on our server.
If we use an AWS certificate, we can use it on AWS services only; AWS Certificate Manager is a managed service.
To create a certificate, we should own the domain, or have a mail (Exchange) server for it.
Step 1:
Enter your domain.
Step 2:
Select a validation method:
DNS validation, or
email validation - you should have a mail server for the domain.
Step 3:
Review and confirm, then copy the CNAME name/value pair to your DNS server.
Go to your DNS server, create the record set, and save.
It takes 2-3 minutes; go back to AWS Certificate Manager and the pending state should now be issued, and the certificate is ready to use on AWS services.
Note: anyone running an external load balancer with HTTPS must be using some domain certificate like this.
ELB listener
A load balancer always has a listener. What does it do? It checks the request URL content and forwards it to the matching target group.
With every load balancer, 1 default target group is created.
Use case:
We want to forward traffic to different servers on the basis of URL content, e.g. image1.sanjaydahiya.com or software1.sanjaydahiya.com should route traffic to the corresponding server.
We can achieve this with listener rules.
Lab:
- Create target groups, e.g. image and software.
- Assign EC2 instances to them.
- Add the DNS records in the domain as well, e.g. www.software.sanjaydahiya.com etc.
- Go to the load balancer's listener tab - add a rule.
How do you check load balancer logs?
Lab
Create an S3 bucket,
go to the load balancer Description - Attributes - enable access logs,
and attach the bucket to the LB.
Hit the URL; after 3-4 minutes, the access logs should be available in the S3 bucket.
Auto Scaling
Auto scaling of EC2:
auto scale in,
auto scale out.
- Example: you have load balancing from a load balancer.
- You have 4 VMs in your environment at 25% load each and you don't have auto scaling, so your resources may be under-utilized; or, if you have load, they could be over-utilized.
- You can't monitor it manually and provision/decommission EC2 instances by hand all the time.
Benefits:
Cost cutting is possible.
Proper resource utilization.
Critical applications can be handled properly.
Auto scaling steps
Step 1
Create/configure the EC2 application.
Do proper testing; if it works fine, go to the next step.
Step 2
Create an image (AMI).
Step 3
Launch configuration:
1. image
2. instance type
3. HDD (EBS volume)
4. key pair
5. security group
6. tags, etc.
Step 4
Auto scaling group -
it controls the actual number of EC2 instances:
Min instance value - the minimum instances kept when load is lowest.
Max instance value - the maximum instances possible when load increases.
Desired instance value - the instances actually running right now according to load.
e.g. min 2, max 20, desired 5 running currently. (A CLI sketch follows below.)
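A minimal sketch of those group sizes from the CLI (the group name, launch configuration, and subnet IDs are placeholders):
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-configuration-name web-launch-config \
    --min-size 2 --max-size 20 --desired-capacity 5 \
    --vpc-zone-identifier "subnet-0123456789abcdef0,subnet-0fedcba9876543210"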
Step 5
EC2 provisioning & deprovisioning:
scaling plans
- Dynamic scaling - e.g. a condition is applied: if CPU goes above 75% it adds an EC2 instance; if CPU goes below 25% an EC2 instance is removed.
- Manual scaling - e.g. not waiting for any condition; manually increase/decrease the EC2 count.
- Scheduled scaling - e.g. (7pm to 9pm) scheduled for a specific time and period.
Auto scaling is possible with even 1 EC2 instance - auto scaling does health checks too - e.g. you run an application on a single EC2 instance in an auto scaling group; if the condition matches, it takes action accordingly, though it takes whatever time a new instance needs to launch.
If you don't want downtime, you can run a load balancer with multiple VMs behind it; if one goes down, another takes over.
CloudWatch
It is a monitoring tool.
We can set monitoring, e.g. CPU at 70% or more, or 25% or less - set an alarm - action - SNS (send email) / auto scaling / Lambda / Ansible / Python (script). (A CLI sketch follows below.)
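A minimal sketch of such a CPU alarm from the CLI (the instance ID and SNS topic ARN are placeholders):
aws cloudwatch put-metric-alarm \
    --alarm-name high-cpu \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold 70 --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:111122223333:notify-me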
Detailed monitoring is at 1-minute granularity (it is paid).
EC2 monitoring
ram-util
ram-avail
hdd-used
hdd-avail
apache server
nfs server
Data is retained in CloudWatch for a maximum of 15 months; if you want to keep it longer than 15 months, you can store it in an S3 bucket.
How do you set up a logs agent to get application logs into CloudWatch?
Step:
Install the awslogs agent:
yum install awslog* -y
systemctl start awslogsd
systemctl enable awslogsd
Step:
cd /etc/awslogs
ls
awscli.conf
awslogs.conf
config/
proxy.conf
Note: make sure the region in awscli.conf matches the region where the EC2 instance is launched, and enable the application logs and other logs via awslogs.conf.
Step:
Create a role with
CloudWatchLogsFullAccess.
Step:
Attach the role to the EC2 instance.
Check in CloudWatch whether the logs appear in the Logs section; if not, restart the awslogs service once. For debugging, check /var/log/awslogs.log.
CloudWatch Events
In CloudWatch we can also monitor events, e.g. when an EC2 instance starts/stops we call an action - invoke SNS, a Lambda function, etc.
Create an alarm on billing:
Go to Billing - Billing preferences - enable billing alerts - then create an alert: select the metric - Total Estimated Charge - done.
AWS Database Services
RDS - Relational Database Service
Just as AWS provides EC2, they provide the RDS service.
RDS instance (free tier):
750 free hours - db.t2.micro
20 GB storage - free
20 GB - backups
AWS manages the backup process
and hardware maintenance.
Supported engines:
1. MariaDB
2. Oracle
3. PostgreSQL
4. Amazon Aurora
5. MySQL
6. SQL Server
RDS instance deployment types:
Single-AZ
Multi-AZ
Read Replica RDS instance - an instance where we keep the read replica separate from the writer, for fast responses; it is a copy of your primary. We can also define that read queries are forwarded to the read replica.
Another advantage of a read replica: you can put it in another region (cross-region).
e.g. if we have many read-only customers in California, we can put our read replica in that region; if they need to write from there, the writes come to our primary.
Create a Single-AZ database:
Follow all the instructions step by step;
once the database is created, access it from your Linux machine with the mysql client (-u user and password),
but before this you should have the client installed on your system:
mysql -u admin -p -h database-1.cprujwifphby.us-east-1.rds.amazonaws.com
If you are not able to connect to the database from EC2, please check the following:
the port is open in the security group,
and which VPC/security group you assigned to the DB.
Create a Multi-AZ database:
Create a subnet group,
add 2 subnets in different availability zones,
add the security group, VPC, etc.
You will be able to access it via the endpoint.
Create a read replica:
It will be in read-only mode.
Lab:
Create a Multi-AZ database,
create a read replica of the same database,
then create a database (schema) on the Multi-AZ primary,
after that go to the read replica and check whether you can see that DB.
If yes, it means replication is happening.
Now try to create a database on the read replica; you should get an error about the --read-only option.
Route 53
Domain registration - you register your domain, like on GoDaddy.
Domain hosting - normally companies host DNS on their end: they purchase a domain and host it on their local servers, etc.
AWS provides hosting for both public and private zones; GoDaddy gives public hosting only.
Route 53 has a 100% availability SLA.
Routing policies:
Simple routing: when we route to 2 endpoints, it effectively load-balances - it sends one request to the 1st endpoint and the next to the 2nd.
Multivalue answer routing:
if a user hits www.sanjaydahiya.com or admin.sanjaydahiya.com it can resolve to the same endpoint or to multiple endpoints; you create the sub-domains/custom domains and point them accordingly.
Weighted routing policy: e.g. it sends traffic in a 75% / 25% ratio.
Latency routing policy:
it checks latency first, and traffic is transferred according to latency.
It checks where the latency is lowest and sends traffic there.
Geolocation routing:
it sends traffic according to the user's location; it doesn't check latency at all.
Failover routing:
if the primary fails, traffic is transferred to the secondary; we define primary and secondary here.
Traffic policy
It is a graphical flow diagram tool, where we can define routing policies with drag-and-drop and clicks.
CloudFront Distribution
CDN - Content Delivery Network:
edge (cache) servers at different locations.
Origin - an ELB, a web server directly, or an S3 bucket.
CloudFront saves static content on the edge servers; only for dynamic content does it go and fetch from the main source.
Benefits of CloudFront:
users out of country get faster access;
it also has web access control, like firewall rules - you can deny some countries if you want.
Lab:
create an EC2 instance and install the httpd service;
create a load balancer and attach the node to the target group;
create a CloudFront distribution on the load balancer - select the load balancer as the origin;
also select the price class "all edge locations".
Elastic Beanstalk
It is useful for developers: when they want to create an environment for testing, they don't need to depend on an AWS admin; they can create their own test environment from here.
CloudFormation
It is a tool for infrastructure creation: we can write a template and create the whole infrastructure as code.
Template sections:
1. AWSTemplateFormatVersion
2. Description
3. Metadata
4. Parameters
5. Mappings
6. Conditions
7. Outputs
8. Resources
Sections 1 to 7 are optional;
section 8 (Resources) is required.
................................
Sample:
{
  "Resources": {
    "FirstEc2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-0a28a1ad6f52a7332",
        "SubnetId": "subnet-09f23dbbeac46d4d3"
      }
    }
  }
}
.........................................
{
  "Resources": {
    "FirstEc2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-0220e36289ed46870",
        "AvailabilityZone": "us-east-1d",
        "SubnetId": "subnet-0f3d184f90a377e32",
        "InstanceType": "t2.medium",
        "KeyName": "AWS_Virginia",
        "SecurityGroupIds": ["sg-0b21d2ea0fb853832"],
        "Tags": [
          {
            "Key": "Name",
            "Value": "Test-Server"
          }
        ]
      }
    }
  }
}
..............................................
De-provisioning: when we deploy something from CloudFormation and later delete it, it de-provisions - meaning it deletes the whole stack.
Note: we can enable termination protection, to protect against deletion by mistake.
Migrate an on-premises VM to AWS
1. Download the AWS CLI.
2. Install the AWS CLI.
3. Get access keys to configure the AWS CLI.
4. Test your access.
5. Create a role - the name should be vmimport - with the relevant permissions (administrator, or VM import/export).
6. Export the virtual machine.
7. Create a bucket and upload the image into the S3 bucket.
8. Import the virtual machine.
9. Access the virtual machine.
10. Test.
aws ec2 import-image --description "On-premises-Redhat-VM" --disk-containers
aws ec2 describe-import-image-tasks --import-task-ids import-ami-abcd1234
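The --disk-containers argument above needs a value; a minimal sketch of the full call, with a hypothetical bucket and key:
aws ec2 import-image --description "On-premises-Redhat-VM" \
    --disk-containers "Format=vmdk,UserBucket={S3Bucket=my-import-bucket,S3Key=exported-vms/redhat-vm.vmdk}"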
Launch an instance with a Python script
First install the AWS CLI
and configure/connect the AWS CLI.
Run the Python scripts below from a Python shell:
# Start an EC2 machine
import boto3
client = boto3.client('ec2')
client.start_instances(InstanceIds=[''])
# Stop an EC2 machine
import boto3
client = boto3.client('ec2')
client.stop_instances(InstanceIds=[''])
# Launch an EC2 machine
import boto3
client = boto3.client('ec2')
resp = client.run_instances(ImageId='',
                            InstanceType='t2.micro',
                            MinCount=1,
                            MaxCount=1)
for instance in resp['Instances']:
    print(instance['InstanceId'])
Host a website with the requirements below, and create an infra diagram.
Make sure your infra considers these points:
- Make sure your setup is highly available.
- Make sure your setup is secure and is not exposed to the outer world.
Sol. Create a VPC.
Create 4 subnets:
2 subnets for instance HA,
2 subnets for the ELB, corresponding to the instance AZs.
Put the instances in the private subnets.
Put the ELB in the public subnets.
Create the LB with the ELB subnets and a security group.
Check the connectivity of your website.
You can also bind it to a domain with the help of Route 53.
AWS CloudTrail
AWS CloudTrail is a log of every single API call that has taken place inside your Amazon environment. Each call is considered an event and is written in batches to an S3 bucket. These CloudTrail events show us details of the request, the response, the identity of the user making the request, and whether the API call came from the AWS Console, the CLI, a third-party application, or another AWS service.
AWS Backup
You should use AWS Backup to manage and monitor backups across the AWS services you use, including EBS volumes, from a single place.
Lifecycle Manager
Amazon Data Lifecycle Manager (DLM) manages EBS snapshots.
DLM provides a simple way to manage the lifecycle of EBS resources,
such as volume snapshots.
You should use DLM when you want to automate the creation, retention, and deletion of EBS snapshots.
How does AWS Backup relate to Amazon Data Lifecycle Manager, and when should I use one over the other?
A: DLM policies and backup plans created in AWS Backup work independently from each other and provide two ways to manage EBS snapshots - DLM for EBS-only snapshot automation, and AWS Backup for centralized backups across services, as described above.
Network ACL
It is an extra layer of security that protects your VPC;
it acts as a firewall for controlling traffic in and out.
Inbound
Outbound - in the default NACL, rule 100 allows all traffic in both directions.
We need to attach subnets (subnet associations).
2-tier architecture & 3-tier architecture
In a 2-tier architecture we have only the client and the server/database.
In a 3-tier architecture we have the client, a business layer, and the database server.