
Terraform Part 2 — Installation and First Deploy

· 7 min read
Terraform Series (2/15)
  1. Terraform Part 1 — What Is Terraform
  2. Terraform Part 2 — Installation and First Deploy
  3. Terraform Part 3 — HCL Syntax
  4. Terraform Part 4 — Variables and Outputs
  5. Terraform Part 5 — Providers
  6. Terraform Part 6 — Resources and Dependencies
  7. Terraform Part 7 — Data Sources and Import
  8. Terraform Part 8 — State Management
  9. Terraform Part 9 — Modules
  10. Terraform Part 10 — Loops and Conditionals
  11. Terraform Part 11 — Workspaces and Environment Separation
  12. Terraform Part 12 — Kubernetes and Helm Providers
  13. Terraform Part 13 — CI/CD Integration
  14. Terraform Part 14 — Testing and Policy
  15. Terraform Part 15 — Practical Patterns and Pitfalls
From Installation to First Deploy in One Go

We covered enough theory in Part 1. This part is all about getting your hands dirty and experiencing Terraform firsthand: install it, connect to AWS, spin up an EC2 instance with a single .tf file, then tear it down, watching what the four core commands (init, plan, apply, destroy) actually do along the way.

This guide assumes you have an AWS Free Tier account. If not, you can follow along visually. The concepts are the same.

Installing Terraform

Terraform is a single binary written in Go. Installation is straightforward. Just pick the official method for your operating system.

macOS — Homebrew

brew tap hashicorp/tap
brew install hashicorp/tap/terraform

Add the official HashiCorp tap and install. A plain brew install terraform also works, but that formula is frozen at the last open-source release (1.5.7) because of the license change, so the official tap is the reliable way to get current versions.

Linux — Package Manager (Ubuntu/Debian Example)

# Register HashiCorp GPG key
wget -O- https://apt.releases.hashicorp.com/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

# Add apt repository
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
  https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update && sudo apt install terraform

Register the GPG key and repository, then install via apt. For CentOS/Fedora, the process is similar using a yum repository.
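For reference, the equivalent steps on a dnf-based system look roughly like this, following HashiCorp's documented repository setup (the repo URL shown is the Fedora one; RHEL/CentOS swap the path segment):

```shell
# Fedora (and similar for RHEL/CentOS) -- add the HashiCorp repo, then install
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
sudo dnf install -y terraform
```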

Windows — Chocolatey

choco install terraform

One line and you’re done. If you use WSL, the Linux method above also works.

Verify the Version

Check right away whether installation was successful.

terraform version

If the output looks like this, you’re good.

Terraform v1.9.5
on darwin_arm64

Managing Multiple Versions with tfenv

In practice, different projects may require different Terraform versions. tfenv makes it easy to switch between versions per project. Not required, but handy to know.

# macOS
brew install tfenv

# Install and use a specific version
tfenv install 1.9.5
tfenv use 1.9.5

# Pin version per directory (.terraform-version file)
echo "1.9.5" > .terraform-version

If a .terraform-version file exists, it automatically switches to that version when you enter the directory. Same idea as Ruby’s rbenv or Node’s nvm.

Preparing Your AWS Account and CLI

Terraform needs credentials to create anything in AWS. Issue an Access Key and set up an AWS CLI profile.

Creating an IAM User and Issuing an Access Key

In the console: IAM -> Users -> Create user -> Attach policies -> (for labs use AdministratorAccess; in production use least privilege). Navigate to “Security credentials” tab for the created user and click “Create access key.” Select “Command Line Interface (CLI)” as the use case.

The issued Access Key ID and Secret Access Key are shown only once. Be sure to save them somewhere secure. If leaked through a commit mistake, the nightmare begins.

Installing the AWS CLI and Configuring a Profile

# macOS
brew install awscli

# Configure a profile
aws configure --profile terraform-demo

Running aws configure prompts you for four things.

AWS Access Key ID [None]: AKIA....
AWS Secret Access Key [None]: ....
Default region name [None]: ap-northeast-2
Default output format [None]: json

The information you enter is stored in ~/.aws/credentials and ~/.aws/config. You can have multiple profiles.
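The resulting files look roughly like this (keys truncated; the profile name is whatever you chose):

```ini
# ~/.aws/credentials
[terraform-demo]
aws_access_key_id     = AKIA...
aws_secret_access_key = ...

# ~/.aws/config
[profile terraform-demo]
region = ap-northeast-2
output = json
```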

Verification

aws sts get-caller-identity --profile terraform-demo

If the output shows your account ID and user ARN, you’re ready to go. If you get an error here, the key is wrong or there’s a network issue — fix that first.

Setting Up the Project Structure

Terraform operates at the directory level. It reads all .tf files in the directory and treats them as a single configuration. Create a folder for our lab.

mkdir terraform-first-deploy && cd terraform-first-deploy

The folder is empty. We’ll add files one at a time.
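By the end of this part, the directory will look like this:

```
terraform-first-deploy/
├── providers.tf   # terraform block and provider configuration
├── main.tf        # EC2, S3, and random_id resources
└── outputs.tf     # values to print after apply
```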

The provider Block — Connecting to AWS

The very first thing to declare is “which provider to use.” Create a providers.tf file.

# providers.tf
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region  = "ap-northeast-2"
  profile = "terraform-demo"
}

Three things are declared here: the minimum Terraform version (required_version), the AWS provider and its version constraint (required_providers), and the provider configuration itself (the region to use and the CLI profile holding the credentials).

Provider details will be explored in Part 5. For now, think of it as “a declaration that connects to AWS.”

First Resources — EC2 and S3

Create main.tf and declare two resources: one small t3.micro EC2 instance and one S3 bucket. (Free Tier eligibility varies by region and account; in some regions t2.micro is the free-tier instance type, so check yours before you apply.)

# main.tf

# Automatically find the Amazon Linux 2023 AMI
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

# EC2 instance
resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"

  tags = {
    Name        = "terraform-first-deploy"
    ManagedBy   = "terraform"
    Environment = "demo"
  }
}

# S3 bucket (names must be globally unique, so add a random suffix)
resource "random_id" "bucket_suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "demo" {
  bucket = "terraform-first-deploy-${random_id.bucket_suffix.hex}"

  tags = {
    ManagedBy = "terraform"
  }
}

data "aws_ami" finds “the latest Amazon Linux 2023 published by Amazon” at runtime instead of hardcoding the AMI in code. This kind of data source usage is covered in detail in Part 7.

S3 bucket names must be globally unique, so we used a random_id resource to append a 4-byte hex suffix. The random provider also needs to be added to required_providers.

# Add to providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}

The Four Essential Commands

Now comes the most important moment. We’ll run the four steps of Terraform’s core workflow in order.

flowchart LR
    INIT[terraform init<br/>Download providers] --> PLAN[terraform plan<br/>Preview changes]
    PLAN --> APPLY[terraform apply<br/>Apply changes]
    APPLY --> USE[Operate infrastructure]
    USE --> DESTROY[terraform destroy<br/>Delete everything]
    USE -.->|After editing .tf| PLAN

terraform init

Run this when starting a project for the first time or when providers have changed.

terraform init

This command does two things.

  1. Downloads the providers specified in required_providers from the Terraform Registry (stored in .terraform/ inside the project, or in ~/.terraform.d/plugin-cache if you have configured a plugin cache)
  2. Initializes the State backend (the default is a local terraform.tfstate file)

The result is a .terraform/ directory and a .terraform.lock.hcl file. The lock file pins the exact provider versions that were selected, so commit it to Git.
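For reference, the lock file looks roughly like this (version and hashes here are illustrative and truncated; yours will show whatever satisfied the ~> 5.0 constraint):

```hcl
# .terraform.lock.hcl (excerpt, illustrative)
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.67.0"
  constraints = "~> 5.0"
  hashes = [
    "h1:...",
    "zh:...",
  ]
}
```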

terraform plan

This command is Terraform’s most important safety net.

terraform plan

It compares the desired state in .tf files against the current State to show “which resources will be created, changed, or destroyed.” Nothing is actually changed. The output looks roughly like this.

Terraform will perform the following actions:

  # aws_instance.web will be created
  + resource "aws_instance" "web" {
      + ami                          = "ami-0c..."
      + instance_type                = "t3.micro"
      + ...
    }

  # aws_s3_bucket.demo will be created
  + resource "aws_s3_bucket" "demo" {
      + bucket = "terraform-first-deploy-a1b2c3d4"
      + ...
    }

Plan: 3 to add, 0 to change, 0 to destroy.

+ means the resource will be created, - deleted, ~ modified in place, and -/+ recreated (deleted, then created anew). Make a habit of always reviewing this output before apply. Pay special attention to -/+: the resource is destroyed and rebuilt, so the change cannot happen without downtime.
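For example, if you later changed the AMI (a change EC2 cannot apply in place), the plan would look roughly like this:

```
  # aws_instance.web must be replaced
-/+ resource "aws_instance" "web" {
      ~ ami = "ami-0c..." -> "ami-0d..." # forces replacement
        ...
    }

Plan: 1 to add, 0 to change, 1 to destroy.
```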

terraform apply

Actually applies the changes shown in the plan.

terraform apply

It shows the plan once more and requires you to type “yes” before proceeding. In automated environments you can bypass this with the -auto-approve flag, but when running manually, always review and confirm.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket.demo: Creating...
aws_instance.web: Creating...
aws_s3_bucket.demo: Creation complete after 2s
aws_instance.web: Still creating... [10s elapsed]
aws_instance.web: Creation complete after 23s [id=i-0abc...]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

At this point, if you open the AWS Console, you can confirm the instance and bucket were actually created. A terraform.tfstate file has also been generated: it is the file where Terraform records what it created. Never edit it by hand, and never commit it to Git. (In principle, State belongs in a remote backend, a topic covered in depth in later parts.)
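If you are curious what State contains, inspect it through Terraform's read-only commands instead of opening the file:

```shell
terraform state list   # names of everything Terraform manages here
terraform show         # full recorded attributes of those resources
```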

terraform destroy

When the lab is done, clean up all resources.

terraform destroy

It is plan in reverse: every managed resource is planned for deletion, and after you confirm with yes, Terraform actually deletes them.

Plan: 0 to add, 0 to change, 3 to destroy.
...
Destroy complete! Resources: 3 destroyed.

Always run destroy when you’re done with labs. Leaving resources beyond the Free Tier can lead to an unwelcome bill next month.

Apply Once More — Experiencing Idempotency

Here’s a simple experiment to feel the core of the declarative approach: idempotency. After creating resources with apply, run apply again without changing anything.

terraform apply

The result looks like this.

No changes. Your infrastructure matches the configuration.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

It does nothing. This is idempotency. If the “declared state” and the “current state” match, Terraform quietly moves on. An imperative script would have thrown an error like “already exists.”
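To feel the contrast, here is a sketch of the imperative equivalent using the AWS CLI (the AMI ID and tag values are placeholders): every run has to check the current state itself before acting.

```shell
# Imperative sketch: idempotency is YOUR job, not the tool's
RUNNING=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=terraform-first-deploy" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].InstanceId' --output text)

if [ -z "$RUNNING" ]; then
  aws ec2 run-instances --image-id ami-... --instance-type t3.micro
fi
```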

Now try changing a tag in the .tf file.

tags = {
  Name        = "terraform-first-deploy"
  ManagedBy   = "terraform"
  Environment = "staging"   # demo -> staging
}

Run plan again and the difference is caught precisely.

~ tags = {
    "Environment" = "demo" -> "staging"
    ...
  }

Plan: 0 to add, 1 to change, 0 to destroy.

It changes exactly what changed. This is the power of Terraform.

Seeing Results — The output Block

Sometimes you want to print resource information to the console. Create outputs.tf.

# outputs.tf
output "instance_id" {
  description = "EC2 instance ID"
  value       = aws_instance.web.id
}

output "instance_public_ip" {
  description = "EC2 public IP"
  value       = aws_instance.web.public_ip
}

output "s3_bucket_name" {
  description = "S3 bucket name"
  value       = aws_s3_bucket.demo.bucket
}

These values are printed at the end of terraform apply. You can also query them anytime with terraform output.

terraform output instance_public_ip
# "3.35.xxx.xxx"

Outputs are widely used for passing values between modules, integrating with CI/CD scripts, and more. We’ll cover them in depth in Part 4.
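In scripts, the -raw and -json flags make outputs easy to consume (a sketch; assumes the apply above succeeded):

```shell
# -raw strips the quotes for direct use in shell
PUBLIC_IP=$(terraform output -raw instance_public_ip)
ssh ec2-user@"$PUBLIC_IP"

# or take all outputs as JSON for tools like jq
terraform output -json
```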

What to Watch Out for When Committing — .gitignore

If you’re committing this project to Git, a .gitignore is essential. Include the following.

# Terraform
.terraform/
.terraform.lock.hcl   # Usually committed; list it here only if your team decides not to
*.tfstate
*.tfstate.*
*.tfplan
crash.log
.env
terraform.tfvars      # May contain sensitive variable values
*.auto.tfvars

Two entries deserve special attention: the State files and the lock file.

To share State with your team, use a remote backend like S3 + DynamoDB, Terraform Cloud, or GitLab instead of a local file. The topic deserves its own part and is covered later in this series; until then, the HashiCorp official documentation is a good reference.
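As a preview, a remote backend is declared inside the terraform block; a minimal S3 sketch looks like this (the bucket and table names are placeholders you would create first):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-team-terraform-state"        # placeholder
    key            = "first-deploy/terraform.tfstate"
    region         = "ap-northeast-2"
    dynamodb_table = "terraform-locks"                # placeholder, for state locking
    encrypt        = true
  }
}
```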

.terraform.lock.hcl is the file that pins provider versions, so it’s generally committed. It prevents issues caused by team members using different versions.

Workflow Review

Let’s draw the complete flow one more time.

sequenceDiagram
    participant Dev as Developer
    participant TF as Terraform
    participant State as tfstate
    participant AWS as AWS API

    Dev->>TF: terraform init
    TF->>TF: Download providers
    Dev->>TF: terraform plan
    TF->>State: Query current state
    TF->>AWS: Query resource status
    TF->>Dev: Output changes
    Dev->>TF: terraform apply (yes)
    TF->>AWS: Create/modify/delete resources
    TF->>State: Record results
    TF->>Dev: Completion message
    Note over Dev,AWS: Operate infrastructure
    Dev->>TF: terraform destroy
    TF->>AWS: Delete everything
    TF->>State: Clean up State

These four steps work the same regardless of project scale. What changes is where you store State, how many providers you use, and how you automate plan in CI.

What’s Next

Now that we’ve completed the first deploy, we’re ready to “use” Terraform. At this point, what’s needed to write better code is an understanding of the language itself. You need to know how HCL is structured and what expressions are available to write readable code in real projects.

In the next part, we’ll dive into HCL syntax in earnest — from the structure of blocks, arguments, and expressions to conditionals, for expressions, and built-in functions.


-> Part 3: HCL Syntax

