Wednesday, February 16, 2022

Terraform


Terraform commands

apply    - Builds or changes infrastructure
console  - Interactive console for Terraform interpolations
destroy  - Destroys Terraform-managed infrastructure
env      - Environment management (deprecated, replaced by workspace)
fmt      - Rewrites config files to canonical format
get      - Downloads and installs modules for the configuration
graph    - Creates a visual graph of Terraform resources
import   - Imports existing infrastructure into Terraform
init     - Initializes a new or existing Terraform configuration
output   - Reads an output from a state file
plan     - Generates and shows an execution plan
push     - Uploads this Terraform module to Atlas to run
refresh  - Updates the local state file against real resources
show     - Inspects Terraform state or plan
taint    - Manually marks a resource for recreation
untaint  - Manually unmarks a resource as tainted
validate - Validates the Terraform files
version  - Prints the Terraform version

All other commands:
debug        - Debug output management (experimental)
force-unlock - Manually unlock the Terraform state


terraform init
downloads provider plugins into the .terraform folder

terraform plan
works like a dry run: it shows what will change if you run the apply command

terraform validate
shows any syntax errors in the code

terraform fmt main.tf
fixes indentation and formatting issues in a Terraform file such as main.tf

terraform providers
shows the list of providers used by the configuration

terraform show
reads the Terraform state file and shows it in an easy-to-read format


terraform refresh
if any change happened in the cloud and you want to update your tfstate file with those changes,
use the refresh command

terraform graph > test.dot
creates a graph file from the .tf files; you can open the .dot file in gvedit


terraform env/workspace
manages the Terraform state file for different environments.
Example: when you use multiple environments it maintains a .tfstate file for each one,
      e.g. for dev, uat and prod
terraform workspace list
shows all workspaces
terraform workspace new dev
creates a new workspace
terraform workspace show
shows your current workspace
terraform workspace select dev
switches to the dev workspace
basically it creates the tfstate file in a per-workspace folder under terraform.tfstate.d
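A small sketch of how the current workspace can drive per-environment values inside the config (the variable name and map values here are assumptions for illustration only):

# pick an instance type based on the active workspace
variable "instance_type_per_env" {
  type = map(string)
  default = {
    dev  = "t2.micro"
    uat  = "t2.small"
    prod = "t2.medium"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-03a0c45ebc70f98ea"
  instance_type = var.instance_type_per_env[terraform.workspace]

  tags = {
    Name = "app-${terraform.workspace}"   # e.g. app-dev, app-prod
  }
}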

terraform taint / untaint
used to recreate a resource.
Example: we have a requirement to recreate a particular resource:
terraform taint resource_type.resource_name
terraform taint aws_instance.web_server
so on the next plan or apply this resource will be recreated

terraform login / logout
lets you log in to Terraform Cloud,
where we can maintain our state files and workspaces.
You can log in after you create an account and generate a token on the Terraform Cloud console (app.terraform.io),
and using that token you can log in from the Terraform CLI.
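A rough sketch of pointing a configuration at Terraform Cloud after logging in (the organization and workspace names below are placeholders; the cloud block needs Terraform 1.1+, older versions use backend "remote" instead):

terraform {
  cloud {
    organization = "my-org"      # placeholder organization on app.terraform.io
    workspaces {
      name = "dev"               # placeholder Terraform Cloud workspace
    }
  }
}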

 

Current state
current state means whatever infrastructure exists after you apply.
Desired state
Terraform's primary function is to create, modify and destroy infrastructure resources to match the desired state described in the Terraform configuration.

Terraform debugging and validation
log level
Debug
Trace
Info
Warn

TF_LOG = DEBUG/INFO/WARN/TRACE    log level
TF_LOG_PATH = <LOG_FILE_PATH>

Set env variable for this
export TF_LOG=DEBUG
echo $TF_LOG
export TF_LOG_PATH="/Users/rahulwagh/Documents/log/debug.log"

Block
validation {
  condition     = can(regex("^[Tt][2-3]\\.(nano|micro|small)$", var.instance_type))
  error_message = "Invalid instance type name. You can only choose t2.nano, t2.micro or t2.small."
}
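The validation block only works inside a variable block, so here is a minimal sketch of the full declaration (the default value and description are assumptions):

variable "instance_type" {
  type        = string
  default     = "t2.micro"   # assumed default, for illustration only
  description = "EC2 instance type"

  validation {
    condition     = can(regex("^[Tt][2-3]\\.(nano|micro|small)$", var.instance_type))
    error_message = "Invalid instance type name. You can only choose t2.nano, t2.micro or t2.small."
  }
}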


https://www.youtube.com/watch?v=rcRYCOkfPpg&list=PLhSvfWqpahw0bmYHa4iXGDDCrKB_yYETm&index=7

Orchestration Management Tool


You can also get a scenario as below:

Here you write code and push it into Git; you already have Jenkins integrated with Git, so the job runs on Jenkins and it deploys the infrastructure on AWS.


Another case is also possible, where you run Terraform directly against AWS.

Local variable

A local value is useful when you use a similar tag for multiple resources: you define it once as a local value,

and then reference it wherever you need it.

# declaring the local value
locals {
  common_tag = {
    Name  = "UK-Project"
    Owner = "Sanjay Dahiya"
  }
}

# referencing it below:
resource "aws_ebs_volume" "example-UK" {
  availability_zone = "us-east-2a"
  size              = 40
  tags              = local.common_tag
}

Count Parameter

The count parameter: for example, when you need to create multiple instances you can use count.

 

If you need to keep a different name for each EC2 instance created with the count parameter, you can use a list.

variable "ami_image" {
default = "ami-03a0c45ebc70f98ea"
}

variable "instance_tag" {
type = list
default = ["dev-dep","test-dep","prod-dep"]
}

variable "instance_type" {
type = list
default = ["t2.micro","t2.nano","t2.small"]
}

# instance creation with a loop
resource "aws_instance" "test-dev" {
  ami           = var.ami_image
  instance_type = var.instance_type[count.index]
  count         = 3
  tags = {
    Name = var.instance_tag[count.index]
  }
}
..................................................................

Conditional statement

resource "aws_instance" "dev" {
  ami           = var.ami_image[0]
  instance_type = var.instance_type["dev"]
  #count = var.input == "dev" ? 1 : 0  # if the input value is "dev" it will add 1 instance
  #count = var.input > 2 ? 2 : 0       # if the input value is greater than 2 it will add 2 instances
  #count = var.input >= 2 ? 2 : 0      # if the input value is greater than or equal to 2 it will add 2 instances
  count = var.input != 2 ? 2 : 0       # if the input value is not equal to 2 it will add 2 instances

  tags = {
    Name = "Dev-Department"
  }
}
 
 
Output declarations
how to check Terraform outputs from the command line:
terraform output instance-arn
terraform output db_password

Full project link for a Terraform example with output files:
https://github.com/hashicorp/learn-terraform-outputs
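A minimal sketch of the output declarations those commands would read (the resource and variable names are assumed here, not taken from the project above):

output "instance-arn" {
  description = "ARN of the example EC2 instance"
  value       = aws_instance.web.arn
}

output "db_password" {
  description = "Database password passed in as a variable"
  value       = var.db_password   # assumes a db_password variable exists
  sensitive   = true              # hides the value in normal plan/apply output
}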


 

Terraform state is stored in terraform.tfstate, in JSON format.

What is the terraform.tfstate.backup file in Terraform?

By default, a backup of your state file is written to terraform.tfstate.backup, so that recovery is simpler if the state file is lost or corrupted.

 


If multiple people are working on a project, how do you resolve the issues below?

1. Creating resources at different times, if using a local repo.

2. Creating resources at different times, if using a common shared folder.

3. Creating resources at different times, using GitHub.

4. Creating resources even while using some versioning method, but still forgetting to push/pull the tfstate file on the versioning platform.

5. Creating resources using something like an S3 bucket, but still having an issue when several people trigger resource creation at the same time; some lock feature is required.

Solution - use some kind of locking on the versioned state; for example, on S3 we use the DynamoDB locking feature.


In case you have a requirement where you need to keep 2 environments, prod and staging/uat:

you create 2 folders locally, stage/uat and prod, and for each folder you map its .tfstate to an S3 bucket with versioning and DynamoDB for locking.

In case you have a requirement where you want to keep 2 environments but you have only 1 S3 bucket and one DynamoDB table for locking:

Solution - use the workspace method.


Terraform State

Whenever you run terraform apply, it matches the .tf files against terraform.tfstate; if something changed in the .tf files it creates or changes those resources and updates the terraform.tfstate file.

Before updating terraform.tfstate it creates a backup, terraform.tfstate.backup.

How to store terraform.tfstate on AWS s3?

provider "aws" {
region = "us-east-2"
access_key = "AKIA3JSXUWW67XZCLVP4"
secret_key = "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"

}

# store terraform.tfstate file store in s3 bucket
terraform {
backend "s3" {
bucket = "cloudproject1111"
key = "cloud/project"
region = "us-east-2"
access_key = "xxxxxxxxxxxxxxxxxxxxxxxxxx"
secret_key = "yGdaxxxxxxxxxxxxxxxxxxxxxxxx238"
dynamodb_table = "cloudproject" #for locking feature
}
}

resource "aws_instance" "web" {
ami = "ami-03a0c45ebc70f98ea"
instance_type = "t2.micro"
tags = {
Name = "web"
}
}

resource "aws_instance" "db" {
ami = "ami-03a0c45ebc70f98ea"
instance_type = "t2.micro"
tags = {
Name = "db"
}
}
 
In the above example, if we also want the locking feature,
so that 2 people can't run Terraform commands at the same time,
we can use DynamoDB as the locking mechanism.
How to work in multiple environments using the workspace + S3 method?
terraform workspace list        list all workspaces
terraform workspace show        show current workspace
terraform workspace --help

create
terraform workspace new prod
terraform workspace new stage

switch workspace
terraform workspace select prod
Delete workspace
terraform workspace delete stage
 
Lab
create a .tf file
create 2 workspaces.
run the terraform commands from the 1st workspace and then from the 2nd workspace,
so it will create 1 common bucket, and under a folder there will be 2 sub-folders, the 1st for the prod and the 2nd for the dev environment.
it will store the .tfstate in the respective folder.

Terraform Provisioners


Local-exec provisioner

Use case
when the EC2 machine is already created and we want to gather information from it on the machine running Terraform, we can use the local-exec provisioner.

Remote-exec provisioner
It runs on the remote system, i.e. on the AWS resource we are creating.

File provisioner
when you want to copy a file from the local machine to the remote machine

Can we copy a file to any directory on the remote machine, like /etc?
No, you can only copy files to /tmp on the remote machine,
and you can then use remote-exec to cp from /tmp to /etc.

How is the connection made between the remote machine and your base machine?
winrm
ssh
and you use the connection block:
connection {
  type        = "ssh"   # or "winrm"
  user        = "ec2-user"
  private_key = file("/path/to/key.pem")
}
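A rough sketch that puts the provisioners and the connection block together on one resource (the key pair, file names and AMI are assumptions):

resource "aws_instance" "web" {
  ami           = "ami-03a0c45ebc70f98ea"
  instance_type = "t2.micro"
  key_name      = "my-keypair"   # assumed existing key pair

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("/path/to/my-keypair.pem")
    host        = self.public_ip
  }

  # copy a file from the local machine to /tmp on the remote machine
  provisioner "file" {
    source      = "app.conf"
    destination = "/tmp/app.conf"
  }

  # then move it to /etc on the remote machine
  provisioner "remote-exec" {
    inline = [
      "sudo cp /tmp/app.conf /etc/app.conf"
    ]
  }

  # gather info about the machine on the local (base) machine
  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> instance_ips.txt"
  }
}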

Vendor provisioners
Terraform supports the vendor provisioners below:
Chef
Puppet
Habitat
Salt

Packer
Packer is used to create system images.

Packer basically creates system images for multiple platforms from a single source configuration file.

Use case

when we have 5 different platforms and we want to create an image on each platform,
we can use Packer to create the image for each platform,
so we can write a single config file to create an image for any platform.

Packer templates can be written in 2 formats:
JSON and HCL

When Packer creates an image, it launches 1 instance and configures it according to the Packer file.

Packer sections
Builders - target (AWS, Azure, etc.), source image (an existing OS image, like an AWS AMI)
Provisioners - configuration inside the image (like installing apache2, app config, hardening, software installs)

Post-processors -

And after creating the image on the cloud it destroys the instance.


In the scenario below, when code is pushed to Git, CI-CD runs from Jenkins and
it triggers CloudWatch; a CloudWatch action is taken,
the AMI is updated, and from there it can edit or create a launch configuration.


packer build packer.json

{
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "AXXXXXXXXXXXXXX4",
      "secret_key": "yGdXXXXXXXXXXXXXXXXXXXXXXX38",
      "region": "us-east-2",
      "source_ami": "ami-03a0c45ebc70f98ea",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "cloud_ami"
    }
  ]
}

to convert json to hcl
packer hcl2_upgrade packer.json

 variable.json
{
  "aws_access_key": "AKxxxxxxxxxxxxxxxxxxxxxP4",
  "aws_secret_key": "yGdxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx38"
}

 packer.json

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "us-east-2",
      "source_ami": "ami-03a0c45ebc70f98ea",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "cloud_ami"
    }
  ]
}

cmd for running Packer with a variable file:

packer build -var-file=variable.json packer.json

Priority of variables:
1st - in the builder
2nd - in the variables defined in the Packer file
3rd - in the variable file
 

We can also set ENV variable for access_key and secret_key
export AWS_ACCESS=AXxxxxxxxxxxxxxxxxxC
export AWS_SECRET=ZxxxxxxxxxxXXXXXXXxxxxxxSG
echo $AWS_ACCESS
echo $AWS_SECRET

call env variable

{
  "variables": {
    "access_key": "{{env `AWS_ACCESS`}}",
    "secret_key": "{{env `AWS_SECRET`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `access_key`}}",
      "secret_key": "{{user `secret_key`}}",
      "region": "us-east-2",
      "source_ami": "ami-03a0c45ebc70f98ea",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "cloud_ami"
    }
  ]
}


if you want to set the ENV variable permanently,
add the variable to the bashrc file

cmd to run Packer with vars on the command line:
packer build -var aws_access_key=AXxxxxxxxxxxxxxxxxxC -var aws_secret_key=ZxxxxxxxxxxXXXXXXXxxxxxxSG packer.json


If you don't want to use access and secret keys you can use a role:
assign a role to the EC2 machine from where you are going to run the packer command,
and it will execute the packer command without access keys.

Packer provisioners

The file Packer provisioner uploads files to machines built by Packer.
Provisioners use builtin and third-party software to install and configure the machine image after booting.
Provisioners prepare the system for use, so common use cases for provisioners include:

    installing packages
    patching the kernel
    creating users
    downloading application code

Benefit of Packer:
you do not need to deploy a source image on a physical server, configure it manually, and then create the image manually.

Packer provisioner list:
https://www.packer.io/docs/provisioners/file

Packer Provisioner

shell

file

ansible

-----------------------------------------
"provisioners": [
    {
      "type": "ansible",
      "user": "ubuntu",
      "playbook_file": "../play.yml"
    }
------------------------------------------
"provisioners": [
    {
      "type": "file",
      "source": "./diigo",
      "destination": "/tmp"
    },
    {
      "type": "shell",
      "inline": [
        "sudo apt update -y",
        "sudo apt install apache2 -y",
        "sudo cp -rvf /tmp/diigo/* /var/www/html",
        "sudo systemctl restart apache2"
      ]
    }

--------------------------------------------------------------------

Packer ansible-local provisioner - it runs directly on the EC2 machine for which we are creating the image.
If you use the ansible-local provisioner, you must install Ansible on that machine first, e.g. with a shell provisioner.

Packer ansible (remote) provisioner
- it runs on your base machine from where you are executing the
packer command, and it executes the playbook content against the remote machine.

Packer ansible - if local is not specified in the type, it is treated as the remote provisioner.


 

If you need to create images for multiple platforms at once, what will you do?
You can call multiple builders in the file,
e.g. for AWS, Azure etc.

Only
When you have multiple builders and you want to run a provisioner only for a specific builder, you can use "only" in the provisioner:
"provisioners": [
    {
      "type": "shell",
      "only": ["test"],
      "inline": [
        "sudo apt update -y",
        "sudo apt install apache2 unzip -y",
        "sudo wget https://www.free-css.com/assets/files/free-css-templates/download/page276/transportz.zip",
        "sudo unzip transportz.zip",
        "sudo cp -rvf transportz/* /var/www/html",
        "sudo systemctl restart apache2"
      ]
    }
  ]
}

Override
In case you want to override any one command in the provisioner section you can use the override parameter.
Use case: when you have 3 builders and in each builder's image you want a different password,
you can use the override parameter.

      "override": {
        "prod": {
          "inline": [
            "sudo apt update -y",
            "sudo apt install apache2 -y",
            "sudo cp -rvf /tmp/index.html /var/www/html",
            "sudo systemctl restart apache2",
            "sudo cd ..",
            "sudo cd .."
          ]
        }

Pause
In case you need to pause before or after a provisioner,
e.g. some commands need more time, or after system start we need some pause time, we can use it.

After
"provisioners": [
    {
      "type": "shell",
      "pause_after": "30s",
      "inline": [
        "sudo sleep 30",
        "sudo apt update -y",
        "sudo apt install apache2 -y"
      ]
    }

]

Before
"provisioners": [
    {
      "type": "shell",
      "pause_before": "30s",
      "inline": [
        "sudo sleep 30",
        "sudo apt update -y",
        "sudo apt install apache2 -y"
      ]
    }

]


Max retries
by default it is always 0 when we don't define it,
but if we define it we can set the count.
It only retries if a command returns an error;
if the command succeeds the 1st time it will not retry.

"provisioners": [
    {
      "type": "shell",
      "max_retries": "5",
      "inline": [
        "touch home"
      ]
    }
]


timeout
If a command does not complete within the timeout, it exits that command.
e.g. if you have a command that needs input and the expected execution time is 1 min,
and it takes longer than that, you can use the timeout parameter.

"provisioners": [
    {
      "type": "shell",
      "timeout": "60s",
      "inline": [
        "sudo yes"
      ]
    }
]


What is terraform init?
What is a tainted resource?
What is state file locking?
How will you hide credentials in Terraform, like AWS keys?
How do you create multiple cloud environments (dev, prod etc.) with a single tfstate file?
What Terraform commands are the most useful?
    terraform init - Initializes the working directory and downloads provider plugins.
    terraform refresh - Updates the state file against real resources.
    terraform output - Views outputs of Terraform.
    terraform apply - Executes the Terraform code and creates objects.
    terraform destroy - Destroys what Terraform has constructed.
    terraform graph - Generates a graph in DOT format.
    terraform plan - Shows what Terraform will do before it is applied.
How will you manage and regulate rollbacks if something goes wrong?

import

How to import a resource in Terraform?
Import any resource that is already created in the cloud and that you need to sync locally, using the terraform import command.

1. import an EC2 machine
cmd:
terraform import <resource_type>.<resource_name> <resource_id>

terraform import aws_instance.ec2_example i-1234


main.tf file before running the import command:
vi main.tf
provider "aws" {
  region                   = "eu-central-1"
  shared_credentials_files = ["/Users/rahulwagh/.aws/credentials"]
}

#terraform import aws_instance.ec2_example i-097f1ec37854d01c2

#terraform show
resource "aws_instance" "ec2_example" {

}

The above is before running the terraform import command.
Run the terraform import command:
terraform import aws_instance.ec2_example i-097f1ec37854d012

After a successful import, now add the resource attributes to the main.tf file:


vi main.tf
provider "aws" {
  region                   = "eu-central-1"
  shared_credentials_files = ["/Users/rahulwagh/.aws/credentials"]
}

#terraform import aws_instance.ec2_example i-097f1ec37854d01c2

#terraform show
resource "aws_instance" "ec2_example" {
  ami           = "ami-06ce824c157700cd2"
  instance_type = "t2.micro"
  tags = {
    "Name" = "my-test-ec2"
  }
}

and now run:
terraform plan - it should show no changes; terraform apply should also show no changes
terraform apply

2. import an S3 bucket

cmd:
terraform import <resource_type>.<resource_name> <resource_id>

terraform import aws_s3_bucket.my_test_bucket my-demo-jhooq-bucket

vi main.tf
provider "aws" {
  region                   = "eu-central-1"
  shared_credentials_files = ["/Users/rahulwagh/.aws/credentials"]
}

#terraform import aws_s3_bucket.my_test_bucket my-demo-jhooq-bucket

#terraform show
resource "aws_s3_bucket" "my_test_bucket" {

}

resource "aws_s3_bucket_acl" "example" {

}


The above is before running the terraform import commands.
Run the terraform import commands:
terraform import aws_s3_bucket.my_test_bucket my-demo-jhooq-bucket
terraform import aws_s3_bucket_acl.example my-demo-jhooq-bucket

After a successful import, now add the resource attributes to the main.tf file:
vi main.tf
provider "aws" {
  region                   = "eu-central-1"
  shared_credentials_files = ["/Users/rahulwagh/.aws/credentials"]
}

#terraform import aws_s3_bucket.my_test_bucket my-demo-jhooq-bucket

#terraform show
resource "aws_s3_bucket" "my_test_bucket" {
  bucket = "my-demo-jhooq-bucket"
  tags = {
    "Name" = "test-bucket"
  }
}

#terraform import aws_s3_bucket_acl.example my-demo-jhooq-bucket
resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.my_test_bucket.id
}

and now run:
terraform plan - it should show no changes; terraform apply should also show no changes
terraform apply

Roll-back in Terraform
we should have versioning enabled for the Terraform state,
so we can go back to the previous version and roll back the tfstate file and the code.

Terraform state
It is created in your local workspace.
the tfstate file contains all the metadata for what is specified in the main.tf file

tfstate local - on your local machine
tfstate remote - in an S3 bucket etc.
how to create remote tfstate - we use the backend block in the main.tf file


how to pull changes from the remote tfstate?
cmd - terraform state pull

how to push the Terraform state file?
cmd - terraform state push, but it's not recommended



Terraform module

when you have multiple developers working on different pieces and you don't want to re-write the same code, and you want to reuse another piece, you can use a module
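A minimal sketch of calling a shared module (the module path and input variable names are hypothetical):

# reusable code lives in ./modules/ec2 (main.tf, variables.tf, outputs.tf)
module "web_server" {
  source        = "./modules/ec2"          # hypothetical local module path
  ami_image     = "ami-03a0c45ebc70f98ea"
  instance_type = "t2.micro"
}

# anything the module exposes in outputs.tf can be referenced as:
# module.web_server.instance_id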


Workspace

How to create a new terraform workspace
terraform workspace new dev
terraform workspace new test


cmds:

terraform workspace list  - list all workspaces.

terraform workspace show - show the active workspace.

terraform workspace select test - switch workspace.

terraform plan -var-file="dev.tfvars"  - planning with a variable file, dev.tfvars.


  1. Terraform Configuration Files:

    • .tf files: These files contain the actual Terraform configuration code, including resource definitions, variables, outputs, etc. They typically have a .tf extension (e.g., main.tf, variables.tf, outputs.tf).
  2. Terraform State Files:

    • terraform.tfstate or terraform.tfstate.d: The state file(s) store the current state of your infrastructure. This file is essential for Terraform to track resources and manage changes.
  3. Terraform Lock File:

    • .terraform.lock.hcl: As discussed earlier, this file locks down the versions of providers and modules used in your configuration to ensure reproducible builds.
  4. Terraform Backend Configuration:

    • backend.tf: This file specifies the backend configuration for storing the Terraform state, such as AWS S3, Azure Blob Storage, etc.
  5. Variable Definitions:

    • variables.tf: This file contains variable declarations used throughout your Terraform configuration.
  6. Output Definitions:

    • outputs.tf: This file contains output definitions for values that are useful to interact with or reference outside of Terraform.
  7. Provider Configuration:

    • provider.tf: This file may contain provider configurations if you prefer to separate them from your main configuration.
  8. Module Definitions (if used):

    • modules/: If your configuration uses modules, you'll typically have a directory containing module definitions. Each module may have its own set of configuration files.
  9. Environment-specific Configuration:

    • dev.tfvars, prod.tfvars, etc.: These files contain environment-specific variable values. They are typically used with the -var-file flag during terraform apply or terraform plan.
  10. Terraform CLI Configuration:

  • terraform.rc, .terraformrc, or .terraformrc.json: Configuration files that specify settings for the Terraform CLI, such as plugin directories, CLI behavior, etc.


what is a dynamic block?
It solves a code-repetition issue - suppose we need to open ports 22, 443 and so on.

In Terraform, the locals block within the main.tf file allows you to define local values that can be reused throughout your configuration. These local values can be expressions, strings, lists, maps, or any other valid Terraform value type. Here the local value holds the list of ingress rules that the dynamic block iterates over:

locals {
  ingress_rules = [{
    port        = 443
    description = "Ingress rules for port 443"
  },
  {
    port        = 22
    description = "Ingress rules for port 22"
  }]
}

dynamic "ingress" {
  for_each = local.ingress_rules

  content {
    description = ingress.value.description
    from_port   = ingress.value.port
    to_port     = ingress.value.port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
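The dynamic "ingress" block has to live inside a resource that accepts ingress blocks; a sketch with an aws_security_group (the group name is an assumption):

resource "aws_security_group" "web_sg" {
  name = "web-sg"   # assumed name

  dynamic "ingress" {
    for_each = local.ingress_rules

    content {
      description = ingress.value.description
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}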

what is variable.tf?
what is terraform.tfvars?
How to handle multiple terraform.tfvars files?
How to set a variable value using a command-line var?

creating a variable in main.tf
or
creating a variable in variable.tf

 


when we create terraform.tfvars, we only declare the variable in variable.tf, and the actual value we put in terraform.tfvars

and why do we need a terraform.tfvars file? - when we have multiple environments, staging, prod etc.
 

Output block
when you want to debug your Terraform code values, it helps to print the attribute references (arn, instance_state, output_arn, public_ip)

output "my_console_output" {
  value     = aws_instance.ec2_example.public_ip
  sensitive = true
}
 

Data sources
when you want to fetch some information from your resources

A data source in Terraform allows you to fetch information from an external system or provider and use that data within your configuration. Data sources are read-only and provide dynamic information that Terraform uses during the planning phase. Common use cases for data sources include fetching information about existing infrastructure components, retrieving configuration details from external systems like AWS, Azure, or GCP, or fetching information from external APIs.

data "aws_instance" "myawsinstance" {
   filter {
     name = "tag:name"
     values = ["Terraform EC2"]
  }
  depends_on = [
    "aws_instance.ec2_example"
  ]
}

output "fetched_info_from_aws" {
  value = data.aws_instance.myawsinstance.public_ip
}

here we mention the resource type aws_instance and its data source name myawsinstance,
and we define a filter, because we may have multiple EC2 instances, so we mention its tag.

then we define a dependency. why? - to be more specific: here we say we only want the one EC2 resource, ec2_example.

output - here the value is data.<resource-type>.<data-source-name>.public_ip

User_data
in this block we put the script or commands that we want to run when the instance starts
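A rough sketch of passing a bootstrap script through user_data (the script content and AMI are assumptions):

resource "aws_instance" "web" {
  ami           = "ami-03a0c45ebc70f98ea"
  instance_type = "t2.micro"

  user_data = <<-EOF
              #!/bin/bash
              sudo apt update -y
              sudo apt install apache2 -y
              sudo systemctl start apache2
              EOF
}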

 

depends_on meta-argument
it is used to define a dependency for a resource.
example: if you want an S3 bucket to be created before an EC2 instance, you define depends_on in the EC2 instance resource, as shown in the sketch below.
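A minimal sketch of that exact case (the bucket name is an assumption): the EC2 instance waits for the S3 bucket because of depends_on:

resource "aws_s3_bucket" "logs" {
  bucket = "my-app-logs-bucket"   # assumed bucket name
}

resource "aws_instance" "app" {
  ami           = "ami-03a0c45ebc70f98ea"
  instance_type = "t2.micro"

  depends_on = [aws_s3_bucket.logs]   # bucket is created before this instance
}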
 

null resource

does not create anything by itself
1. you can mention 2 options: triggers and provisioner
2. triggers is optional
3. provisioner can be any of them (local-exec, remote-exec, file)

resource "null_resource" "null_resource_simple" {
  triggers = {
    id = aws_instance.ec2_example.id
  }
  provisioner "local-exec" {
    command = "echo Hello World"
  }
}

what does it do?
execute commands
run shell scripts
run Ansible playbooks
run Python programs

what is a trigger?
if a value inside triggers changes,
it will execute the provisioner again.

Rule - if the value inside triggers changes, it will execute.

Terraform Lifecycle

Common lifecycle configuration attributes:

  • create_before_destroy: Specifies whether Terraform should create a replacement resource before destroying the existing one during updates.
  • prevent_destroy: Prevents Terraform from deleting a resource. It's commonly used to prevent accidental deletion of critical resources.
  • ignore_changes: Instructs Terraform to ignore changes to specific attributes of a resource during updates.
  • replace_triggered_by: Forces the resource to be replaced when the referenced resources or attributes change.
 

resource "aws_instance" "example" {
  # Resource configuration...

  lifecycle {
    create_before_destroy = true
    prevent_destroy       = true
    ignore_changes        = ["tags"]
    on_create {
      # Commands to run after resource creation
    }
    on_delete {
      # Commands to run before resource deletion
    }
  }
}
 
