Complete Terraform Workflow for AWS S3 & CloudFront

Static Site Deployment Tutorial – Terraform + AWS S3 + CloudFront | Randomize Blog

🚀 Static Site Deployment Tutorial

Learn Infrastructure as Code with Terraform + AWS S3 + CloudFront


1. Quick Start: What is Terraform?

Terraform lets you create AWS infrastructure (S3, CloudFront, etc.) by writing code instead of clicking in the AWS Console.

Basic Workflow

# 1. Write infrastructure code (.tf files)
# 2. Run: terraform init    # Downloads AWS provider
# 3. Run: terraform plan    # Shows what will be created
# 4. Run: terraform apply   # Actually creates resources

Quick Example

Create an S3 bucket with code:

# main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-static-site-123"
}

# Run: terraform init && terraform apply
# Result: S3 bucket created in AWS!

Key Concepts (Quick Reference)

Concept  | Example                                   | Purpose
Provider | provider "aws"                            | Connects to AWS
Resource | resource "aws_s3_bucket"                  | Creates an AWS resource
Variable | variable "bucket_name"                    | Makes code reusable
Module   | module "s3" { source = "./modules/s3" }   | Reusable code blocks
Output   | output "url" { value = ... }              | Shows results after deploy
💡 In Simple Terms: Write code → Terraform creates AWS resources → Your site is deployed!

2. What This Project Does

📦 Get the Project

Clone the repository to get started:

git clone https://github.com/m-saad-siddique/static-site-IAC-deploy.git

This project deploys static websites to AWS using Terraform. Here's what happens when you deploy:

What Gets Created

Your Files (HTML/CSS/JS) → S3 Bucket (Private Storage) → CloudFront CDN (Global Distribution) → HTTPS URL (Your Live Site)
🔒 OAC Security: Only CloudFront can access S3

Project Files You’ll Use

webgl-deploy/
├── main.tf               # Main config (uses modules)
├── variables.tf          # Configurable values
├── environments/dev/     # Dev settings
│   └── terraform.tfvars  # Edit this for your config
└── modules/              # Reusable code
    ├── s3/               # S3 bucket code
    └── cloudfront/       # CloudFront code

Quick Workflow

# 1. Initialize Terraform
terraform init

# 2. Plan deployment
terraform plan -var-file=environments/dev/terraform.tfvars

# 3. Apply infrastructure
terraform apply -var-file=environments/dev/terraform.tfvars

# 4. Upload files
BUCKET=$(terraform output -raw s3_bucket_id)
aws s3 sync ./build s3://$BUCKET/ --delete

# 5. Get URL
terraform output deployment_url
🎯 Result: Your static site is live on a CloudFront URL! The commands take about 5 minutes, though full CDN propagation can take 15-20 minutes.

3. Project Structure (Quick Guide)

📁 File Organization

webgl-deploy/
├── main.tf                       # Uses modules to create resources
├── variables.tf                  # Configurable values
├── outputs.tf                    # Shows URLs after deploy
├── environments/
│   └── dev/terraform.tfvars      # Your settings
└── modules/                      # Reusable code
    ├── s3/                       # Creates S3 bucket
    └── cloudfront/               # Creates CloudFront

How to Use Files

File                              | What You Do
environments/dev/terraform.tfvars | Edit this to change settings (bucket name, region, etc.)
main.tf                           | Uses modules – usually no need to edit
modules/s3/main.tf                | Edit if you need to customize the S3 bucket
scripts/*.sh                      | Optional helper scripts (see manual steps in tutorial)

Quick Example: Change Bucket Name

# Edit: environments/dev/terraform.tfvars
bucket_name = "my-custom-bucket-name"

# Then deploy
./scripts/apply.sh dev

Essential Rules

  • Never commit terraform.tfvars with real secrets
  • Always run terraform plan before apply
  • Use modules – Don’t put everything in one file
  • Format code – Run terraform fmt before commit

4. Working with Environments

Use different environments (dev, staging, prod) with separate infrastructure. Workspaces keep them isolated.

Quick Example

# Deploy to dev
terraform workspace select dev
terraform apply -var-file=environments/dev/terraform.tfvars

# Deploy to staging
terraform workspace select staging
terraform apply -var-file=environments/staging/terraform.tfvars

# Deploy to prod
terraform workspace select prod
terraform apply -var-file=environments/prod/terraform.tfvars

# Each environment is separate - safe to destroy one!

How Workspaces Work

# Workspace = separate state file per environment
dev workspace     → .terraform/terraform.tfstate.d/dev/
staging workspace → S3: terraform-state-staging/...
prod workspace    → S3: terraform-state-prod/...

# Scripts handle this automatically - you just specify the environment!

Environment Configuration Files

# Edit: environments/dev/terraform.tfvars
environment = "dev"
aws_region  = "us-east-1"
bucket_name = "my-site-dev"

# Edit: environments/prod/terraform.tfvars
environment = "prod"
aws_region  = "us-east-1"
bucket_name = "my-site-prod"

Practical Commands

# Deploy to different environments
terraform workspace select dev
terraform apply -var-file=environments/dev/terraform.tfvars

terraform workspace select staging
terraform apply -var-file=environments/staging/terraform.tfvars

terraform workspace select prod
terraform apply -var-file=environments/prod/terraform.tfvars

# View what's deployed
terraform workspace select dev
terraform output

# Destroy a specific environment
terraform workspace select dev
terraform destroy -var-file=environments/dev/terraform.tfvars

Manual Workspace Commands (If Needed)

terraform workspace list         # List all workspaces
terraform workspace show         # Show current workspace
terraform workspace select dev   # Switch workspace

5. AWS Services (What They Do)

Quick Overview

Service    | What It Does
S3         | Stores your files (HTML, CSS, JS)
CloudFront | Serves files globally (CDN)
OAC        | Security – only CloudFront can access S3

How It Works (Simple)

Upload Files → S3 Bucket (Private) → CloudFront CDN → Users Access HTTPS URL
✅ Fast, Secure, Global Website

Why Private S3 + CloudFront?

  • Security – S3 bucket is private (can’t be accessed directly)
  • Speed – CloudFront CDN is faster than direct S3
  • HTTPS – Free SSL certificate from CloudFront
  • Global – Files cached worldwide for fast access

What You Need to Know

# S3 Bucket
- Stores your static files
- Private (not publicly accessible)
- Encrypted by default

# CloudFront
- Global CDN (Content Delivery Network)
- Serves files from S3
- Provides an HTTPS URL
- Takes 15-20 min to deploy

# OAC (Origin Access Control)
- Security feature
- Only CloudFront can access S3
- Configured automatically

6. Code Snippets: Configuring Different Services

Now let’s look at actual Terraform code for configuring each service. These examples show you how to write infrastructure as code.

📦 1. S3 Bucket Configuration

Here’s how to create a private S3 bucket with security best practices. Note: These are code examples showing Terraform syntax; this project wraps the same resources in modules (see the Using Modules subsection below).

Basic S3 Bucket

# Create S3 bucket
resource "aws_s3_bucket" "webgl_bucket" {
  bucket = "${var.project_name}-${var.environment}-${random_id.bucket_suffix.hex}"

  # Allow bucket deletion even with objects (useful for dev/staging)
  force_destroy = true

  tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

# Block all public access
resource "aws_s3_bucket_public_access_block" "webgl_bucket_pab" {
  bucket = aws_s3_bucket.webgl_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

Enable Encryption

# Server-side encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "webgl_bucket_encryption" {
  bucket = aws_s3_bucket.webgl_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256" # Free, managed by AWS
    }
  }
}

Enable Versioning

# Enable versioning for rollback capability
resource "aws_s3_bucket_versioning" "webgl_bucket_versioning" {
  bucket = aws_s3_bucket.webgl_bucket.id

  versioning_configuration {
    status = var.enable_versioning ? "Enabled" : "Disabled"
  }
}

Configure CORS (for WebGL/Web Apps)

# CORS configuration for cross-origin requests
resource "aws_s3_bucket_cors_configuration" "webgl_bucket_cors" {
  bucket = aws_s3_bucket.webgl_bucket.id

  cors_rule {
    allowed_origins = ["*"] # Or specify your domain
    allowed_methods = ["GET", "HEAD"]
    allowed_headers = ["*"]
    max_age_seconds = 3600
  }
}

S3 Bucket Policy (Allow CloudFront Access)

# Bucket policy that allows only CloudFront to access S3
resource "aws_s3_bucket_policy" "webgl_bucket_policy" {
  bucket = aws_s3_bucket.webgl_bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowCloudFrontServicePrincipal"
        Effect = "Allow"
        Principal = {
          Service = "cloudfront.amazonaws.com"
        }
        Action   = "s3:GetObject"
        Resource = "${aws_s3_bucket.webgl_bucket.arn}/*"
        Condition = {
          StringEquals = {
            "AWS:SourceArn" = var.cloudfront_distribution_arn
          }
        }
      }
    ]
  })

  depends_on = [aws_s3_bucket_public_access_block.webgl_bucket_pab]
}

🌐 2. CloudFront Distribution Configuration

Here’s how to create a CloudFront distribution with Origin Access Control. Note: These are code examples; this project wraps the same resources in modules (see the Using Modules subsection below).

Create Origin Access Control (OAC)

# Origin Access Control - modern replacement for OAI
resource "aws_cloudfront_origin_access_control" "webgl_oac" {
  name                              = "${var.distribution_name}-oac"
  description                       = "OAC for secure S3 access"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

CloudFront Distribution

# CloudFront distribution
resource "aws_cloudfront_distribution" "webgl_distribution" {
  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"
  comment             = "WebGL Distribution for ${var.environment}"

  # Origin - S3 bucket with OAC
  origin {
    domain_name              = aws_s3_bucket.webgl_bucket.bucket_regional_domain_name
    origin_id                = "S3-${aws_s3_bucket.webgl_bucket.id}"
    origin_access_control_id = aws_cloudfront_origin_access_control.webgl_oac.id
  }

  # Default cache behavior
  default_cache_behavior {
    target_origin_id = "S3-${aws_s3_bucket.webgl_bucket.id}"

    # Redirect HTTP to HTTPS
    viewer_protocol_policy = "redirect-to-https"

    allowed_methods = ["GET", "HEAD", "OPTIONS"]
    cached_methods  = ["GET", "HEAD"]

    # Use managed cache policy
    cache_policy_id = "658327ea-f89d-4fab-a63d-7e88639e58f6" # CachingOptimized

    # Compress objects automatically
    compress = true

    # Cache TTL settings
    min_ttl     = 0
    default_ttl = 3600  # 1 hour
    max_ttl     = 86400 # 1 day
  }

  # Custom error responses for SPA routing
  custom_error_response {
    error_code            = 403
    response_code         = 200
    response_page_path    = "/index.html"
    error_caching_min_ttl = 300
  }

  custom_error_response {
    error_code            = 404
    response_code         = 200
    response_page_path    = "/index.html"
    error_caching_min_ttl = 300
  }

  # Viewer certificate (use CloudFront default)
  viewer_certificate {
    cloudfront_default_certificate = true
  }

  # Restrictions
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

🔐 3. IAM Roles and Policies

Here’s how to create IAM roles for CI/CD deployments:

IAM Role for GitHub Actions (OIDC)

# IAM role that can be assumed by GitHub Actions
resource "aws_iam_role" "deployment_role" {
  name = "${var.project_name}-${var.environment}-deployment-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Federated = "arn:aws:iam::${var.aws_account_id}:oidc-provider/token.actions.githubusercontent.com"
        }
        Action = "sts:AssumeRoleWithWebIdentity"
        Condition = {
          StringEquals = {
            "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
          }
          StringLike = {
            "token.actions.githubusercontent.com:sub" = "repo:${var.github_repo}:*"
          }
        }
      }
    ]
  })

  tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

IAM Policy for Deployment

# IAM policy with permissions for S3 and CloudFront
resource "aws_iam_role_policy" "deployment_policy" {
  name = "${var.project_name}-${var.environment}-deployment-policy"
  role = aws_iam_role.deployment_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject",
          "s3:GetObject",
          "s3:DeleteObject",
          "s3:ListBucket"
        ]
        Resource = [
          aws_s3_bucket.webgl_bucket.arn,
          "${aws_s3_bucket.webgl_bucket.arn}/*"
        ]
      },
      {
        Effect = "Allow"
        Action = [
          "cloudfront:CreateInvalidation",
          "cloudfront:GetInvalidation",
          "cloudfront:ListInvalidations"
        ]
        Resource = aws_cloudfront_distribution.webgl_distribution.arn
      }
    ]
  })
}

📝 4. Variables Configuration

Define variables to make your code reusable:

Variable Definitions

# variables.tf

# Project name
variable "project_name" {
  description = "Name of the project"
  type        = string
  default     = "webgl"
}

# Environment
variable "environment" {
  description = "Environment name (dev, staging, prod)"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be one of: dev, staging, prod."
  }
}

# AWS Region
variable "aws_region" {
  description = "AWS region for resources"
  type        = string
  default     = "us-east-1"
}

# S3 Versioning
variable "enable_s3_versioning" {
  description = "Enable S3 bucket versioning"
  type        = bool
  default     = false
}

# CloudFront Cache TTL
variable "default_ttl" {
  description = "Default TTL for CloudFront cache (in seconds)"
  type        = number
  default     = 3600 # 1 hour
}

# Common Tags
variable "common_tags" {
  description = "Common tags for all resources"
  type        = map(string)
  default = {
    Project   = "WebGL Deployment"
    ManagedBy = "Terraform"
  }
}

Environment-Specific Values

# environments/dev/terraform.tfvars
environment          = "dev"
aws_region           = "us-east-1"
enable_s3_versioning = false
default_ttl          = 3600 # 1 hour (faster iteration)

# environments/prod/terraform.tfvars
environment          = "prod"
aws_region           = "us-east-1"
enable_s3_versioning = true # Enable for rollback
default_ttl          = 31536000 # 1 year (longer cache)

📤 5. Outputs Configuration

Define outputs to get important values after deployment. Note: This project uses modules, so outputs reference module outputs:

# outputs.tf

# S3 Bucket ID (from module)
output "s3_bucket_id" {
  description = "Name of the S3 bucket"
  value       = module.s3.bucket_id
}

# CloudFront Distribution Domain (from module)
output "cloudfront_distribution_domain_name" {
  description = "CloudFront distribution domain name"
  value       = module.cloudfront.distribution_domain_name
}

# CloudFront Distribution ID (from module)
output "cloudfront_distribution_id" {
  description = "CloudFront distribution ID"
  value       = module.cloudfront.distribution_id
}

# Full Deployment URL
output "deployment_url" {
  description = "Full HTTPS URL to access the site"
  value       = "https://${module.cloudfront.distribution_domain_name}"
}

# Example: direct resource output (if not using modules)
# output "s3_bucket_id" {
#   value = aws_s3_bucket.webgl_bucket.id
# }

🧩 6. Using Modules

Organize code into reusable modules:

Module Structure

modules/
├── s3/
│   ├── main.tf       # S3 resources
│   ├── variables.tf  # Module inputs
│   └── outputs.tf    # Module outputs
└── cloudfront/
    ├── main.tf
    ├── variables.tf
    └── outputs.tf

Using a Module

# main.tf - Root module

# Use S3 module
module "s3" {
  source = "./modules/s3"

  bucket_name                 = "${var.project_name}-${var.environment}"
  environment                 = var.environment
  cloudfront_distribution_arn = module.cloudfront.distribution_arn
  enable_versioning           = var.enable_s3_versioning
  enable_cors                 = true
  common_tags                 = var.common_tags
}

# Use CloudFront module
module "cloudfront" {
  source = "./modules/cloudfront"

  distribution_name              = "${var.project_name}-${var.environment}"
  s3_bucket_regional_domain_name = module.s3.bucket_regional_domain_name
  default_root_object            = var.default_root_object
  default_ttl                    = var.default_ttl
  common_tags                    = var.common_tags
}

🔧 7. Provider Configuration

Configure the AWS provider:

# versions.tf
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}

# Provider configuration
provider "aws" {
  region  = var.aws_region
  profile = var.aws_profile != "" ? var.aws_profile : null

  default_tags {
    tags = var.common_tags
  }
}

🎯 8. Complete Example: Main Configuration

Here’s a complete example combining everything:

# main.tf - Complete example

# Random ID for unique bucket names
resource "random_id" "bucket_suffix" {
  byte_length = 4
}

# S3 Bucket Module
module "s3" {
  source = "./modules/s3"

  bucket_name                 = "${var.project_name}-${var.environment}-${random_id.bucket_suffix.hex}"
  environment                 = var.environment
  cloudfront_distribution_arn = module.cloudfront.distribution_arn
  enable_versioning           = var.enable_s3_versioning
  enable_cors                 = true
  common_tags                 = var.common_tags
}

# CloudFront Module
module "cloudfront" {
  source = "./modules/cloudfront"

  distribution_name              = "${var.project_name}-${var.environment}"
  environment                    = var.environment
  unique_suffix                  = random_id.bucket_suffix.hex
  s3_bucket_regional_domain_name = module.s3.bucket_regional_domain_name
  default_root_object            = var.default_root_object
  default_ttl                    = var.default_ttl
  common_tags                    = var.common_tags
}

# Outputs
output "cloudfront_url" {
  value = "https://${module.cloudfront.distribution_domain_name}"
}

output "s3_bucket_name" {
  value = module.s3.bucket_id
}

💡 Best Practices in Code

  • Use Variables – Don’t hardcode values
  • Add Descriptions – Document all variables
  • Use Modules – Organize code into reusable components
  • Tag Resources – Add tags for organization
  • Validate Inputs – Use validation blocks
  • Use Locals – Calculate values once, reuse multiple times
  • Add Dependencies – Use depends_on when needed
💡 Tip: Start with simple configurations and gradually add complexity. Use modules to keep your code organized and reusable across environments.
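The "Use Locals" practice above deserves a quick illustration. This is a minimal sketch (the `name_prefix` local and the example bucket are illustrative, not part of this project), showing how a value computed once can be reused across resources:

```hcl
# locals.tf - illustrative sketch
locals {
  # Compute the naming prefix once, reuse it everywhere
  name_prefix = "${var.project_name}-${var.environment}"

  # Merge per-environment tags into the common set
  tags = merge(var.common_tags, {
    Environment = var.environment
  })
}

# Hypothetical resource using the locals
resource "aws_s3_bucket" "example" {
  bucket = "${local.name_prefix}-assets"
  tags   = local.tags
}
```

If the prefix scheme ever changes, you edit one local instead of every resource.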

7. Configure AWS on Local Machine

Before you can deploy infrastructure with Terraform, you need to configure AWS authentication on your local machine. This section covers everything you need to set up AWS CLI and authenticate with your AWS account.

📋 Prerequisites

  1. AWS Account – You need an active AWS account
  2. AWS CLI – Command-line tool for interacting with AWS
  3. IAM User with Access Keys – For programmatic access

🔧 Step 1: Install AWS CLI

First, install the AWS Command Line Interface (CLI) on your machine.

macOS Installation

# Using Homebrew (recommended)
brew install awscli

# Or using pip
pip3 install awscli

Linux Installation

# Download and install
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Windows Installation

Download the AWS CLI MSI installer from aws.amazon.com/cli and run it.

Verify Installation

# Check AWS CLI version
aws --version

# Should output something like: aws-cli/2.x.x

👤 Step 2: Create IAM User and Access Keys

You need an IAM user with programmatic access (access keys) to use with Terraform.

Create IAM User in AWS Console

  1. Log in to AWS Console
  2. Navigate to IAM → Users
  3. Click Create user
  4. Enter username (e.g., terraform-deploy)
  5. Console access (Provide user access to the AWS Management Console) is optional – Terraform only needs programmatic access via access keys, created in a later step
  6. Click Next

Attach Permissions

Attach policies that grant necessary permissions. For this project, you need:

  • S3 – Full access (for bucket creation and file uploads)
  • CloudFront – Full access (for distribution creation and invalidation)
  • IAM – Limited access (for role creation, if using CI/CD)
# Recommended: Attach these managed policies
- AmazonS3FullAccess
- CloudFrontFullAccess
- IAMFullAccess (or create a custom policy with least privilege)
⚠️ Security Best Practice: For production, create a custom IAM policy with only the minimum required permissions instead of using full access policies.
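As a starting point for such a custom policy, here is a hedged sketch of a least-privilege policy covering only file uploads and cache invalidation (the bucket name and account ID are placeholders; note that Terraform itself needs broader permissions to create buckets and distributions, so this is suited to a deploy-only user):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DeployStaticSiteFiles",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-static-site-123",
        "arn:aws:s3:::my-static-site-123/*"
      ]
    },
    {
      "Sid": "InvalidateCloudFrontCache",
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "arn:aws:cloudfront::123456789012:distribution/*"
    }
  ]
}
```

Scope the CloudFront statement down to a specific distribution ARN once you know it.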

Create Access Keys

  1. After creating the user, go to the Security credentials tab
  2. Scroll to Access keys section
  3. Click Create access key
  4. Select Command Line Interface (CLI) as the use case
  5. Click Next and then Create access key
  6. IMPORTANT: Download or copy both:
    • Access Key ID
    • Secret Access Key
🔒 Critical: The Secret Access Key is shown only once! Save it securely. If you lose it, you’ll need to create a new access key.

⚙️ Step 3: Configure AWS CLI Profile

This project uses an AWS profile named deploy-config for authentication. You can configure it interactively or non-interactively.

Method 1: Interactive Setup (Recommended)

This is the easiest method – AWS CLI will prompt you for each value:

aws configure --profile deploy-config

You will be prompted to enter:

  • AWS Access Key ID: Your access key ID (e.g., AKIAIOSFODNN7EXAMPLE)
  • AWS Secret Access Key: Your secret access key
  • Default region name: e.g., us-east-1 (recommended for CloudFront)
  • Default output format: json (recommended)

Method 2: Non-Interactive Setup

Use this method if you want to configure via command line or scripts:

# Set access key ID
aws configure set aws_access_key_id YOUR_ACCESS_KEY_ID --profile deploy-config

# Set secret access key
aws configure set aws_secret_access_key YOUR_SECRET_ACCESS_KEY --profile deploy-config

# Set default region
aws configure set region us-east-1 --profile deploy-config

# Set output format
aws configure set output json --profile deploy-config
💡 Tip: Replace YOUR_ACCESS_KEY_ID and YOUR_SECRET_ACCESS_KEY with your actual keys from Step 2.

Method 3: Manual Configuration Files

You can also edit the configuration files directly:

Edit Credentials File
# Open credentials file
nano ~/.aws/credentials  # or use your preferred editor

# Add the following section:
[deploy-config]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
Edit Config File
# Open config file
nano ~/.aws/config  # or use your preferred editor

# Add the following section:
[profile deploy-config]
region = us-east-1
output = json
🔒 Security: The credentials file contains sensitive information. Never commit it to version control! It should already be in your .gitignore.

✅ Step 4: Verify AWS Configuration

Verify that your AWS profile is configured correctly:

List All Profiles

# List all configured profiles
aws configure list-profiles

Output should include deploy-config:

default
deploy-config

Check Profile Configuration

# View profile settings
aws configure list --profile deploy-config

Test Authentication

# Test the profile - this should return your AWS account info
aws sts get-caller-identity --profile deploy-config

Expected output:

{
  "UserId": "AIDAXXXXXXXXXXXXXXXXX",
  "Account": "123456789012",
  "Arn": "arn:aws:iam::123456789012:user/terraform-deploy"
}
✅ Success! If you see your account ID and user ARN, your AWS profile is configured correctly and you’re ready to use Terraform!

🔍 Step 5: Test AWS Access

Test that you have the necessary permissions:

Test S3 Access

# List S3 buckets (should work if you have S3 permissions)
aws s3 ls --profile deploy-config

Test IAM Access

# Get your IAM user info
aws iam get-user --profile deploy-config

🌍 Using Different AWS Regions

You can configure different regions for different profiles or override the region per command:

# Use a different region for a specific command
aws s3 ls --profile deploy-config --region eu-west-1

# Or set the region via environment variable
export AWS_DEFAULT_REGION=eu-west-1
aws s3 ls --profile deploy-config

🔄 Using a Different Profile Name

If you want to use a different profile name instead of deploy-config:

# Set environment variable before running scripts
export AWS_PROFILE_NAME=my-custom-profile

# Then run your scripts normally
./scripts/plan.sh dev

📁 Configuration File Locations

AWS CLI stores configuration in these files:

File        | Location          | Purpose
credentials | ~/.aws/credentials | Stores access keys (sensitive!)
config      | ~/.aws/config      | Stores region and output format

🔒 Security Best Practices

  • Never Commit Credentials – Add ~/.aws/ to .gitignore
  • Use Least Privilege – Grant only necessary permissions
  • Rotate Access Keys – Change keys regularly (every 90 days recommended)
  • Use IAM Roles – For production, prefer IAM roles over access keys
  • Enable MFA – Use Multi-Factor Authentication for your AWS account
  • Monitor Usage – Check CloudTrail logs for unusual activity
  • Separate Accounts – Use different AWS accounts for dev/staging/prod

🔧 Troubleshooting

Issue: “Unable to locate credentials”

Solution: Make sure you’ve configured the profile:

aws configure --profile deploy-config

Issue: “Access Denied”

Solution: Check that your IAM user has the necessary permissions. Verify with:

aws iam list-attached-user-policies --user-name terraform-deploy --profile deploy-config

Issue: “Invalid credentials”

Solution: Your access keys may have been rotated or deleted. Create new access keys in the IAM console and reconfigure:

aws configure --profile deploy-config

Issue: “Profile not found”

Solution: Verify the profile exists:

# List profiles
aws configure list-profiles

# If deploy-config is missing, configure it
aws configure --profile deploy-config

📋 Quick Reference

# Configure AWS profile
aws configure --profile deploy-config

# Verify profile
aws sts get-caller-identity --profile deploy-config

# List all profiles
aws configure list-profiles

# View profile configuration
aws configure list --profile deploy-config

# Test S3 access
aws s3 ls --profile deploy-config

# Use a different region
aws s3 ls --profile deploy-config --region eu-west-1
✅ Next Steps: Once AWS is configured, you can proceed to install Terraform and start deploying infrastructure!

8. Prerequisites & Setup

Now that AWS is configured, let’s set up the rest of your local environment:

Prerequisites

  1. AWS CLI – ✅ Should be installed and configured (see previous section)
  2. Terraform – Install from terraform.io/downloads
  3. Git – For cloning the repository

Install Terraform

macOS Installation

# Using Homebrew (recommended)
brew install terraform

# Verify installation
terraform version

Linux Installation

# Download Terraform
wget https://releases.hashicorp.com/terraform/1.6.0/terraform_1.6.0_linux_amd64.zip
unzip terraform_1.6.0_linux_amd64.zip
sudo mv terraform /usr/local/bin/

# Verify installation
terraform version

Windows Installation

Download Terraform from terraform.io/downloads and add it to your PATH.

Clone Repository

Clone the project repository from GitHub:

git clone https://github.com/m-saad-siddique/static-site-IAC-deploy.git
cd static-site-IAC-deploy


Verify Setup

# Check AWS CLI
aws --version

# Check Terraform
terraform version

# Verify AWS profile
aws sts get-caller-identity --profile deploy-config
✅ Ready! If all commands work, you’re ready to start deploying infrastructure!

📋 Deployment Workflow

💡 Tip: Follow these steps in order for a successful deployment

9. Step 1: Initialize Terraform

Before you can deploy anything, you need to initialize Terraform. This downloads the AWS provider plugin and sets up your workspace.

Manual Steps

Run these commands to initialize Terraform:

# Set AWS profile
export AWS_PROFILE=deploy-config

# Initialize Terraform (downloads AWS provider)
terraform init

# Check current workspace
terraform workspace show

Expected Output

Initializing Terraform...
Downloading AWS provider...

Terraform has been successfully initialized!

Current workspace: default

Quick Script (Optional)

Or use the helper script that does the same:

./scripts/init.sh

10. Step 2: Plan Your Deployment

Before creating resources, always run a plan. This shows you exactly what Terraform will create, modify, or destroy in AWS.

Manual Steps

Create a plan to preview what Terraform will create:

# Set AWS profile
export AWS_PROFILE=deploy-config

# Initialize (if using remote backend)
terraform init -backend-config=backend/dev.hcl

# Select or create workspace
terraform workspace select dev || terraform workspace new dev

# Create plan with environment variables
terraform plan -var-file=environments/dev/terraform.tfvars -out=tfplan-dev.out

What You’ll See

Terraform will perform the following actions:

  # module.s3.aws_s3_bucket.main will be created
  + resource "aws_s3_bucket" "main" {
      + bucket = "webgl-deploy-dev-abc123"
    }

  # module.cloudfront.aws_cloudfront_distribution.main will be created
  + resource "aws_cloudfront_distribution" "main" {
      + domain_name = "..."
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Understanding the Symbols

+    = Will be created
-    = Will be destroyed
~    = Will be modified
-/+  = Will be destroyed and recreated
⚠️ Always review the plan! Check that the right resources are being created for the correct environment.

Quick Script (Optional)

Or use the helper script:

./scripts/plan.sh dev

11. Step 3: Apply Infrastructure

After reviewing the plan, apply it to actually create the resources in AWS. This will create your S3 bucket and CloudFront distribution.

Manual Steps

Apply the plan to create resources in AWS:

# Set AWS profile
export AWS_PROFILE=deploy-config

# Select workspace
terraform workspace select dev

# Apply using saved plan (recommended)
terraform apply tfplan-dev.out

# OR apply directly (will prompt for confirmation)
terraform apply -var-file=environments/dev/terraform.tfvars

# View outputs
terraform output

What Happens

# Terraform will:
# 1. Create S3 bucket (takes ~10 seconds)
# 2. Create CloudFront distribution (takes ~2 minutes)
# 3. Configure security (OAC, policies)
# 4. Show you the URLs

Expected Output

aws_s3_bucket.main: Creating...
aws_s3_bucket.main: Creation complete
aws_cloudfront_distribution.main: Creating...
aws_cloudfront_distribution.main: Creation complete

Apply complete! Resources: 5 added.

Outputs:
s3_bucket_id   = "webgl-deploy-dev-abc123"
deployment_url = "https://d1234567890.cloudfront.net"
⏱️ Wait Time: CloudFront takes 15-20 minutes to fully deploy. URL works immediately but may show “not ready” until complete.

Quick Script (Optional)

Or use the helper script:

./scripts/apply.sh dev

12. Step 4: Upload Your Static Site

Now that your infrastructure is ready, upload your static website files (HTML, CSS, JS) to the S3 bucket. Then invalidate the CloudFront cache so users see the latest version.

Manual Steps

Upload your static site files to S3 and invalidate CloudFront cache:

# Set AWS profile
export AWS_PROFILE=deploy-config

# Select workspace
terraform workspace select dev

# Get bucket name from Terraform output
BUCKET=$(terraform output -raw s3_bucket_id)

# Upload files to S3 (replace ./build with your build directory)
aws s3 sync ./build s3://$BUCKET/ --delete

# Get CloudFront distribution ID
DIST_ID=$(terraform output -raw cloudfront_distribution_id)

# Invalidate CloudFront cache
aws cloudfront create-invalidation \
  --distribution-id $DIST_ID \
  --paths "/*"

Common Build Directories

# React/Vue/Angular
aws s3 sync ./build s3://$BUCKET/ --delete
aws s3 sync ./dist s3://$BUCKET/ --delete

# WebGL (Unity/Unreal)
aws s3 sync ./WebGLBuild s3://$BUCKET/ --delete

# Static HTML
aws s3 sync ./public s3://$BUCKET/ --delete

Expected Output

upload: build/index.html to s3://webgl-deploy-dev-abc123/index.html
upload: build/style.css to s3://webgl-deploy-dev-abc123/style.css
...
CloudFront invalidation created: I1234567890

Quick Script (Optional)

Or use the helper script:

./scripts/upload.sh dev ./build

13. Step 5: Verify Deployment

Finally, get your CloudFront URL and verify that your site is live and working correctly.

Manual Steps

Get your deployment URL and verify everything works:

# Set AWS profile
export AWS_PROFILE=deploy-config

# Select workspace
terraform workspace select dev

# View all outputs (includes your CloudFront URL)
terraform output

# Get just the deployment URL
terraform output -raw deployment_url

# List files in S3 bucket
BUCKET=$(terraform output -raw s3_bucket_id)
aws s3 ls s3://$BUCKET/ --recursive

Expected Output

s3_bucket_id   = "webgl-deploy-dev-abc123"
deployment_url = "https://d1234567890.cloudfront.net"

# Copy the deployment_url and open it in a browser!

Quick Tests

# 1. Open the URL in a browser
# 2. Check it loads (may take 15-20 min for CloudFront)
# 3. Check HTTPS (should be secure)
# 4. Test that all pages work

Quick Script (Optional)

Or use the helper script:

./scripts/outputs.sh dev
✅ Done! Your site is live at the CloudFront URL! 🎉

14. Configure CI/CD with GitHub Actions

Set up automated deployments using GitHub Actions. This allows your infrastructure and site to deploy automatically when you push code to GitHub, without storing AWS credentials in GitHub.

🎯 What is CI/CD?

CI/CD (Continuous Integration/Continuous Deployment) automates your deployment process:

  • Automatic Deployment – Deploys when you push code
  • No Manual Steps – No need to run scripts locally
  • Secure – Uses OIDC (OpenID Connect) instead of storing AWS keys
  • Consistent – Same process every time
  • Traceable – All deployments logged in GitHub Actions
🔐 Security: CI/CD uses IAM roles with OIDC (OpenID Connect) instead of AWS access keys. No long-lived AWS credentials are stored in GitHub – GitHub Actions exchanges a short-lived token with AWS at deploy time. (The role ARNs you add as GitHub Secrets are identifiers, not credentials.)

📋 Prerequisites

Before setting up CI/CD, ensure you have:

  • ✅ AWS account configured locally (see Section 7)
  • ✅ GitHub repository with your code
  • ✅ Admin access to the GitHub repository
  • ✅ Terraform infrastructure already deployed (or ready to deploy)

🔧 Step 1: Set Up Remote State Backend

CI/CD requires remote state (S3 + DynamoDB) so multiple deployments can share the same state file.

Create Remote State Resources

Use the helper script to create S3 bucket and DynamoDB table for Terraform state:

# For staging environment
./scripts/setup-remote-state.sh staging

# For production environment
./scripts/setup-remote-state.sh prod

This script:

  • Creates S3 bucket for state storage
  • Creates DynamoDB table for state locking
  • Updates backend/staging.hcl or backend/prod.hcl automatically

Verify Backend Configuration

Check that the backend files are updated:

# Check staging backend
cat backend/staging.hcl

# Should show:
# bucket         = "terraform-state-staging-..."
# key            = "static-site-deploy/staging/terraform.tfstate"
# region         = "us-east-1"
# dynamodb_table = "terraform-state-lock-staging-..."
# encrypt        = true

🔐 Step 2: Create IAM Roles with OIDC

Create IAM roles that GitHub Actions can assume using OIDC (no AWS keys needed!).

Get Your AWS Account ID

# Get your AWS account ID
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
echo $AWS_ACCOUNT_ID

Get Your GitHub Repository

Your repository should be in the format: owner/repository-name

Example: m-saad-siddique/static-site-IAC-deploy

Create IAM Roles

Run the setup script for each environment:

# Create staging IAM role
./scripts/setup-iam-oidc.sh staging $AWS_ACCOUNT_ID m-saad-siddique/static-site-IAC-deploy

# Create production IAM role
./scripts/setup-iam-oidc.sh prod $AWS_ACCOUNT_ID m-saad-siddique/static-site-IAC-deploy

Replace:

  • $AWS_ACCOUNT_ID – Your AWS account ID from the previous step
  • m-saad-siddique/static-site-IAC-deploy – Your GitHub repository (owner/repo)

What the Script Creates

The script automatically creates:

  • OIDC Provider – Connects AWS to GitHub (created once per AWS account)
  • IAM Policy – Permissions for S3, CloudFront, and Terraform operations
  • IAM Role – Can be assumed by GitHub Actions
  • Trust Policy – Allows GitHub Actions to assume the role
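The trust policy behind such a role follows the standard shape AWS documents for GitHub Actions federation. A sketch (the account ID, repository, and branch below are placeholders; a setup script would fill in your real values):

```shell
# Sketch of a GitHub-OIDC trust policy (placeholder account ID, repo,
# and branch). This is the standard shape for GitHub Actions federation.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
      },
      "StringLike": {
        "token.actions.githubusercontent.com:sub": "repo:m-saad-siddique/static-site-IAC-deploy:ref:refs/heads/staging"
      }
    }
  }]
}
EOF
```

The role itself would then be created with `aws iam create-role --role-name ... --assume-role-policy-document file://trust-policy.json`. The `sub` condition is what scopes the role to one repository (and optionally one branch).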

Capture Role ARNs

The script will output the role ARN. Copy this value – you’ll need it for GitHub Secrets:

✅ IAM Role created successfully!

Role ARN: arn:aws:iam::123456789012:role/webgl-staging-deployment-role

📝 Next steps:
1. Add this ARN to GitHub Secrets as: AWS_ROLE_ARN_STAGING
2. Add AWS_ACCOUNT_ID to GitHub Secrets: 123456789012
⚠️ Important: Save the Role ARN from the script output. You’ll need it in the next step!

🔑 Step 3: Configure GitHub Secrets

Add secrets to your GitHub repository so workflows can authenticate with AWS.

Navigate to GitHub Secrets

  1. Go to your GitHub repository
  2. Click Settings (top menu)
  3. Click Secrets and variables → Actions (left sidebar)
  4. Click New repository secret

Add Required Secrets

Add these three secrets:

Secret 1: AWS_ACCOUNT_ID
  • Name: AWS_ACCOUNT_ID
  • Value: Your AWS account ID (e.g., 123456789012)
Secret 2: AWS_ROLE_ARN_STAGING
  • Name: AWS_ROLE_ARN_STAGING
  • Value: The staging role ARN from Step 2 (e.g., arn:aws:iam::123456789012:role/webgl-staging-deployment-role)
Secret 3: AWS_ROLE_ARN_PROD
  • Name: AWS_ROLE_ARN_PROD
  • Value: The production role ARN from Step 2 (e.g., arn:aws:iam::123456789012:role/webgl-prod-deployment-role)
✅ Security Note: GitHub Secrets are encrypted and never exposed in logs. They’re only available to GitHub Actions workflows.

📝 Step 4: Deploy Infrastructure with Terraform

Deploy your infrastructure using Terraform. The IAM roles are already created, so Terraform can use them.

Deploy Staging

# Plan staging deployment
./scripts/plan.sh staging

# Apply staging infrastructure
./scripts/apply.sh staging

Deploy Production

# Plan production deployment
./scripts/plan.sh prod

# Apply production infrastructure
./scripts/apply.sh prod

⚙️ Step 5: Understand GitHub Actions Workflows

The project includes pre-configured workflows in .github/workflows/:

Workflow Files

File               | Triggers On            | Environment
-------------------|------------------------|------------
deploy-staging.yml | Push to staging branch | Staging
deploy-prod.yml    | Push to main branch    | Production

What Workflows Do

Each workflow performs these steps:

  1. Checkout Code – Gets code from repository
  2. Configure AWS Credentials – Uses OIDC to assume IAM role
  3. Setup Terraform – Installs Terraform
  4. Initialize Terraform – Downloads providers, configures backend
  5. Select Workspace – Chooses environment workspace
  6. Plan & Apply – Creates/updates infrastructure
  7. Upload Files – Syncs build files to S3
  8. Invalidate CloudFront – Clears cache

Example Workflow Structure

name: Deploy to Staging

on:
  push:
    branches: [staging]

permissions:
  id-token: write   # Required for OIDC
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN_STAGING }}
          aws-region: us-east-1

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Init
        run: terraform init -backend-config=backend/staging.hcl

      - name: Terraform Apply
        run: terraform apply -auto-approve

🚀 Step 6: Test CI/CD

Test your CI/CD setup by pushing code to trigger a deployment.

Test Staging Deployment

# Make a small change (e.g., update a comment)
echo "# Test CI/CD" >> README.md

# Commit and push to staging branch
git add README.md
git commit -m "Test CI/CD deployment"
git push origin staging

Monitor Deployment

  1. Go to your GitHub repository
  2. Click Actions tab
  3. You should see “Deploy to Staging” workflow running
  4. Click on the workflow to see detailed logs

Verify Deployment

Once the workflow completes:

  • ✅ Check the workflow shows green checkmark (success)
  • ✅ Verify your site is updated at the CloudFront URL
  • ✅ Check workflow logs for any warnings

🔄 How CI/CD Works (OIDC Flow)

Here’s how the secure authentication works:

1. Push to GitHub
2. GitHub Actions Starts
3. Request OIDC Token
4. AWS Validates & Issues Credentials
5. Terraform Deploys Infrastructure
6. Upload Files to S3
7. Invalidate CloudFront Cache
✅ Site Live!
🔐 Security Benefits:
  • No AWS keys stored in GitHub
  • Temporary credentials (expire after 1 hour)
  • Scoped to specific repository
  • Can restrict to specific branches
  • All actions logged in CloudTrail

🌿 Branch Strategy

This project uses a three-branch workflow:

Branch  | Environment | CI/CD          | Purpose
--------|-------------|----------------|------------------------------
dev     | Development | Optional       | Local testing and development
staging | Staging     | ✅ Auto-deploy | Pre-production testing
main    | Production  | ✅ Auto-deploy | Live production environment

🔧 Step 7: Customize Workflows (Optional)

You can customize workflows to match your needs:

Change Build Directory

If your build files are in a different directory:

# In .github/workflows/deploy-staging.yml
# Find the "Upload to S3" step and change:
- name: Upload to S3
  run: |
    aws s3 sync ./your-build-directory s3://$BUCKET/ --delete

Add Build Step

If you need to build your site before deploying:

# Add before "Upload to S3" step:
- name: Build Static Site
  run: |
    npm install
    npm run build   # Or your build commands

Add Notifications

Get notified when deployments complete:

# Add at the end of workflow:
- name: Notify on Success
  if: success()
  run: |
    echo "✅ Deployment successful!"
    # Add Slack/Discord/email notification here

🔍 Troubleshooting CI/CD

Issue: “Not authorized to perform sts:AssumeRoleWithWebIdentity”

Solution:

  • Verify OIDC provider exists: aws iam list-open-id-connect-providers
  • Check repository name matches in IAM role trust policy
  • Ensure id-token: write permission is in workflow

Issue: “Role ARN not found”

Solution:

  • Verify GitHub Secret AWS_ROLE_ARN_STAGING or AWS_ROLE_ARN_PROD exists
  • Check the ARN is correct (no typos)
  • Verify the role exists in AWS: aws iam get-role --role-name webgl-staging-deployment-role

Issue: “State lock error”

Solution:

  • Another deployment might be running – wait for it to complete
  • If stuck, inspect the DynamoDB lock table for the stale entry, then run terraform force-unlock LOCK_ID

Issue: “Backend configuration not found”

Solution:

  • Verify backend/staging.hcl or backend/prod.hcl exists
  • Check the file contains valid bucket and table names
  • Ensure remote state resources were created with setup-remote-state.sh

Issue: “Workflow not triggering”

Solution:

  • Check workflow file is in .github/workflows/ directory
  • Verify branch name matches workflow trigger (e.g., staging or main)
  • Check GitHub Actions is enabled in repository settings

📋 Quick Reference

# 1. Setup remote state
./scripts/setup-remote-state.sh staging
./scripts/setup-remote-state.sh prod

# 2. Get AWS account ID
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

# 3. Create IAM roles
./scripts/setup-iam-oidc.sh staging $AWS_ACCOUNT_ID m-saad-siddique/static-site-IAC-deploy
./scripts/setup-iam-oidc.sh prod $AWS_ACCOUNT_ID m-saad-siddique/static-site-IAC-deploy

# 4. Add GitHub Secrets:
#    - AWS_ACCOUNT_ID
#    - AWS_ROLE_ARN_STAGING
#    - AWS_ROLE_ARN_PROD

# 5. Deploy infrastructure
./scripts/apply.sh staging
./scripts/apply.sh prod

# 6. Test CI/CD
git push origin staging   # Triggers staging deployment
git push origin main      # Triggers production deployment

✅ CI/CD Checklist

  • ✅ Remote state backend configured (S3 + DynamoDB)
  • ✅ IAM roles created with OIDC
  • ✅ GitHub Secrets added (AWS_ACCOUNT_ID, role ARNs)
  • ✅ Infrastructure deployed with Terraform
  • ✅ Workflow files in .github/workflows/
  • ✅ Tested deployment by pushing to branch
  • ✅ Verified deployment in GitHub Actions
🎉 Congratulations! Your CI/CD pipeline is now configured! Every push to staging or main will automatically deploy your infrastructure and site.

Workflows & Different Environments

This project supports multiple environments (dev, staging, production) with separate infrastructure for each.

Branch Strategy

This project uses a three-branch workflow:

Branch  | Environment | Deployment                    | Purpose
--------|-------------|-------------------------------|----------------------------
dev     | Development | Auto (on push)                | Fast iteration, testing
staging | Staging     | Auto (on push)                | Pre-production testing
main    | Production  | Auto (with optional approval) | Live production environment

Deployment Workflow

  1. Local Development – Test on dev locally
  2. Push to dev – git push origin dev
  3. Auto-deploy dev – CI/CD deploys to dev environment
  4. Merge to staging – Create PR: dev → staging
  5. Auto-deploy staging – CI/CD deploys to staging
  6. Merge to main – Create PR: staging → main
  7. Auto-deploy prod – CI/CD deploys to production

Terraform Workspaces

Each environment uses a separate Terraform workspace for state isolation:

  • dev workspace – Manages dev environment state
  • staging workspace – Manages staging environment state
  • prod workspace – Manages production environment state
🔒 State Isolation: Each environment has its own state file. Destroying one environment doesn’t affect others.

Managing Multiple Environments

Deploy to Specific Environment

# Deploy dev
./scripts/plan.sh dev
./scripts/apply.sh dev

# Deploy staging
./scripts/plan.sh staging
./scripts/apply.sh staging

# Deploy prod
./scripts/plan.sh prod
./scripts/apply.sh prod

View Environment Outputs

./scripts/outputs.sh dev       # Dev environment
./scripts/outputs.sh staging   # Staging environment
./scripts/outputs.sh prod      # Production environment

Destroy Specific Environment

# Destroy only staging (dev and prod remain)
./scripts/destroy.sh staging

# Destroy only dev
./scripts/destroy.sh dev

Branch Protection Rules

To enforce the workflow, set up branch protection in GitHub:

  1. Go to Repository Settings → Branches
  2. Add rule for staging:
    • Require pull request before merging
    • Do not allow deleting this branch
    • Restrict direct pushes
  3. Add rule for main:
    • Require pull request before merging
    • Require approvals (optional)
    • Do not allow deleting this branch
    • Restrict direct pushes
  4. Add rule for dev:
    • Do not allow deleting this branch
⚠️ Important: The branch-protection.yml workflow automatically validates PR rules. It blocks invalid merges (e.g., direct PR to main from feature branch).

Environment Configuration

Each environment has its own configuration file:

  • environments/dev/terraform.tfvars – Dev settings
  • environments/staging/terraform.tfvars – Staging settings
  • environments/prod/terraform.tfvars – Production settings

Key differences:

Setting    | Dev      | Staging        | Prod
-----------|----------|----------------|---------------
IAM Roles  | Disabled | Enabled (OIDC) | Enabled (OIDC)
Cache TTL  | 1 hour   | 1 day          | 1 year
Versioning | Disabled | Enabled        | Enabled

15. Essential Best Practices

🔒 Security (Must Do)

# 1. Never commit secrets
echo "*.tfvars" >> .gitignore
echo ".aws/" >> .gitignore

# 2. Always plan before apply
terraform plan -var-file=environments/dev/terraform.tfvars
terraform apply -var-file=environments/dev/terraform.tfvars

# 3. Use separate environments
terraform workspace select dev
terraform apply -var-file=environments/dev/terraform.tfvars

📝 Code Quality (Quick Wins)

# Format code before committing
terraform fmt

# Validate syntax
terraform validate

# Check what will change
terraform plan

✅ Quick Checklist

Before deploying:

□ Run terraform fmt
□ Run terraform validate
□ Run terraform plan (review output)
□ Test in dev first
□ Never commit .tfvars with secrets
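The "never commit secrets" rule can also be enforced mechanically with a small pre-commit check. This is a hypothetical sketch (and only appropriate if your .tfvars files hold secrets); in a real Git hook you would feed it `git diff --cached --name-only`:

```shell
# Hypothetical pre-commit guard: block *.tfvars files from being
# committed. Reads a file list on stdin; a real hook would pipe in
# `git diff --cached --name-only`.
check_staged() {
  local bad=0 f
  while IFS= read -r f; do
    case "$f" in
      *.tfvars) echo "blocked: $f" >&2; bad=1 ;;
    esac
  done
  return "$bad"
}
```

Dropped into `.git/hooks/pre-commit`, a non-zero return from this function aborts the commit.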

16. Troubleshooting

Common issues and how to resolve them:

Common Issues

Issue: “AWS profile ‘deploy-config’ is not configured”

Solution:

aws configure --profile deploy-config

Issue: “Workspace doesn’t exist”

Solution: Workspace is created automatically on first plan/apply. Just run:

./scripts/plan.sh dev

Issue: “Unable to determine S3 bucket name”

Solution: Make sure you’ve run terraform apply first:

./scripts/apply.sh dev

Issue: CI/CD workflow fails with “Role not found”

Solution:

  1. Verify IAM roles are created: ./scripts/outputs.sh staging
  2. Check GitHub Secrets are set correctly
  3. Verify OIDC is configured in terraform.tfvars

Issue: CloudFront shows “Distribution not ready”

Solution: Wait 15-20 minutes. CloudFront distributions take time to deploy globally.

Useful Commands

# List all workspaces
terraform workspace list

# Show current workspace
terraform workspace show

# Switch workspace manually
terraform workspace select staging

# View Terraform state
terraform state list

# Check AWS profile
aws sts get-caller-identity --profile deploy-config

17. Frequently Asked Questions (FAQ)

Common questions and answers about Terraform, AWS, and this deployment project.

🤔 General Questions

Q: What is Infrastructure as Code (IaC)?

A: Infrastructure as Code is the practice of managing and provisioning infrastructure through machine-readable definition files, rather than through manual processes. Terraform is an IaC tool that lets you define infrastructure in code and version control it.

Q: Why use Terraform instead of AWS Console?

A: Terraform provides:

  • ✅ Version control for infrastructure
  • ✅ Reproducible deployments
  • ✅ Ability to review changes before applying
  • ✅ Documentation of your infrastructure
  • ✅ Multi-cloud support
  • ✅ Team collaboration

Q: Do I need to know programming to use Terraform?

A: No! Terraform uses HCL (HashiCorp Configuration Language), which is declarative and easy to learn. You describe what you want, and Terraform figures out how to create it.

🔧 Terraform Questions

Q: What’s the difference between terraform plan and terraform apply?

A:

  • terraform plan: Shows what changes will be made without actually making them. Safe to run multiple times.
  • terraform apply: Actually creates, modifies, or destroys resources. This makes real changes to your infrastructure.

Always run plan before apply to review changes!

Q: What is a Terraform workspace?

A: A workspace is a named state container. Each workspace has its own state file, allowing you to manage multiple environments (dev, staging, prod) with the same Terraform code but separate infrastructure.
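A common idiom when a workspace may not exist yet is select-or-create. In the sketch below a stub function stands in for the real binary so the control flow can be seen offline; with the real CLI, only the final line is used, verbatim:

```shell
# Select-or-create idiom for workspaces. The stub mimics terraform
# failing to select a workspace that doesn't exist yet; with the
# real CLI, only the final line is needed.
terraform() {
  if [ "$1" = "workspace" ] && [ "$2" = "select" ] && [ "$3" != "default" ]; then
    return 1   # pretend only "default" exists
  fi
  echo "terraform $*"
}

terraform workspace select dev || terraform workspace new dev
# prints: terraform workspace new dev
```

The `||` chain tries to select first and only creates the workspace when selection fails, so the command is safe to run repeatedly.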

Q: Can I use the same Terraform code for multiple environments?

A: Yes! That’s one of Terraform’s strengths. Use:

  • Different .tfvars files per environment
  • Different workspaces per environment
  • Different backend configurations per environment

This keeps your code DRY (Don’t Repeat Yourself).

Q: What happens if I delete a resource manually in AWS Console?

A: On the next plan or apply, Terraform will detect that the resource is missing and propose recreating it. If you instead want state to match reality (i.e., accept the deletion), run terraform apply -refresh-only; to re-adopt a resource that was created outside Terraform, use terraform import.

Q: How do I update Terraform provider versions?

A: Update the version constraint in versions.tf:

required_providers {
  aws = {
    source  = "hashicorp/aws"
    version = "~> 5.0"   # Update this
  }
}

Then run terraform init -upgrade to download the new version.

☁️ AWS Questions

Q: Why is my S3 bucket private? Can’t I make it public?

A: This project uses a private S3 bucket with CloudFront for security and performance:

  • Security: Private buckets are more secure
  • Performance: CloudFront CDN is faster than direct S3 access
  • HTTPS: CloudFront provides free SSL certificates

You can make S3 public, but it’s not recommended for production.

Q: What is Origin Access Control (OAC)?

A: OAC is AWS’s modern way to securely connect CloudFront to private S3 buckets. It replaces the older OAI (Origin Access Identity). OAC ensures only CloudFront can access your S3 bucket, keeping it private while allowing CloudFront to serve content.
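Concretely, OAC works through an S3 bucket policy that trusts the CloudFront service principal, but only on behalf of one specific distribution. A sketch (the bucket name, account ID, and distribution ID are placeholders):

```shell
# Sketch of the bucket policy that backs OAC (placeholder bucket,
# account ID, and distribution ID). Only requests signed by CloudFront
# for this one distribution can read objects.
cat > oac-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "cloudfront.amazonaws.com" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-static-site-123/*",
    "Condition": {
      "StringEquals": {
        "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/E1234567890ABC"
      }
    }
  }]
}
EOF
```

The `AWS:SourceArn` condition is the key part: without it, any CloudFront distribution in any account could front your bucket.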

Q: How long does CloudFront take to deploy?

A: CloudFront distributions typically take 15-20 minutes to fully deploy globally. The URL is available immediately, but it may show “Distribution not ready” until deployment completes.

Q: Why do I need DynamoDB for Terraform state?

A: DynamoDB provides state locking, which prevents multiple people (or CI/CD runs) from modifying infrastructure simultaneously. This prevents conflicts and corruption of your Terraform state.

Q: Can I use a custom domain with CloudFront?

A: Yes! You need to:

  1. Request an SSL certificate in AWS Certificate Manager (ACM)
  2. Add the certificate ARN to your CloudFront configuration
  3. Add your domain as an alias in CloudFront
  4. Create a CNAME record in your DNS pointing to CloudFront

🔐 Security Questions

Q: Is it safe to store Terraform state in S3?

A: Yes, if you:

  • ✅ Enable encryption at rest
  • ✅ Enable versioning
  • ✅ Use bucket policies to restrict access
  • ✅ Never commit state files to version control

State files may contain sensitive data, so always encrypt them.

Q: How does OIDC authentication work?

A: OIDC (OpenID Connect) allows GitHub Actions to request temporary AWS credentials without storing AWS keys:

  1. GitHub generates a JWT token with repository information
  2. AWS validates the token against the OIDC provider
  3. AWS issues temporary credentials (valid for 1 hour)
  4. GitHub Actions uses these credentials to deploy

This is more secure than storing long-lived AWS access keys.

Q: Should I use the same AWS account for dev, staging, and prod?

A: For production, it’s best practice to use separate AWS accounts:

  • ✅ Better security isolation
  • ✅ Prevents accidental production changes
  • ✅ Easier compliance and auditing

For learning or small projects, a single AWS account with separate environments (workspaces) is fine.

🚀 Deployment Questions

Q: How do I update my static site after deployment?

A: Upload new files:

# Local deployment
./scripts/upload.sh dev ./build

# Or manually
aws s3 sync ./build s3://your-bucket/ --delete
aws cloudfront create-invalidation --distribution-id YOUR_DIST_ID --paths "/*"

Or push to GitHub if CI/CD is configured – it will deploy automatically!

Q: Why do I need to invalidate CloudFront cache?

A: CloudFront caches files at edge locations. After uploading new files to S3, you need to invalidate the cache so users get the latest version. Without invalidation, users might see old cached content.
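`"/*"` is the simple, common choice (and counts as a single invalidation path). An alternative sketch is to invalidate only the paths you actually changed; the hypothetical helper below just prefixes each uploaded file, relative to the bucket root, with `/`:

```shell
# Hypothetical: build a targeted --paths list instead of "/*" by
# prefixing each changed file (relative to the bucket root) with "/".
paths_for_invalidation() {
  local f
  while IFS= read -r f; do
    printf '/%s\n' "$f"
  done
}

printf 'index.html\ncss/style.css\n' | paths_for_invalidation
# prints:
# /index.html
# /css/style.css
```

The resulting lines would then be passed to `aws cloudfront create-invalidation --paths ...`. Another common pattern avoids invalidation entirely by putting a content hash in filenames and only invalidating index.html.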

Q: Can I deploy to multiple regions?

A: CloudFront is global, so you don’t need multiple regions. However, if you want S3 buckets in multiple regions, you can:

  • Create separate Terraform configurations per region
  • Use Terraform workspaces with region-specific backends
  • Use CloudFront with multiple origins

Q: How do I rollback a deployment?

A: Several options:

  • Upload previous version: Upload the previous build files to S3
  • Terraform rollback: Use terraform state commands or restore from state backup
  • S3 versioning: If enabled, restore previous object versions

🛠️ Troubleshooting Questions

Q: Terraform says “state is locked” – what do I do?

A: Another operation is running. Options:

  • Wait for the other operation to complete
  • If stuck, manually unlock: terraform force-unlock LOCK_ID
  • For remote state, check DynamoDB table for lock entries

Warning: Only unlock if you’re sure no operation is running!

Q: My CloudFront URL shows “Access Denied” – why?

A: Common causes:

  • CloudFront distribution not fully deployed (wait 15-20 minutes)
  • OAC not properly configured
  • S3 bucket policy missing or incorrect
  • No files uploaded to S3 yet

Q: How do I see what Terraform will change before applying?

A: Always use terraform plan:

./scripts/plan.sh dev # Shows planned changes

The plan shows what will be created (+), modified (~), or destroyed (-).

Q: Can I destroy everything and start over?

A: Yes, but be careful:

# Destroy specific environment
./scripts/destroy.sh dev

# This removes ALL resources in that environment

Warning: This is irreversible! Make sure you have backups if needed.

💡 Tips & Tricks

Q: How can I speed up Terraform operations?

A:

  • Use -parallelism flag to run operations in parallel
  • Cache Terraform providers between runs
  • Use targeted operations: terraform apply -target=resource
  • Avoid unnecessary terraform apply -refresh-only calls

Q: How do I share Terraform state with my team?

A: Use remote state (S3 backend):

  • Configure backend/*.hcl files
  • Use DynamoDB for state locking
  • All team members use the same backend
  • State is automatically shared and locked
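A backend config file in this layout is just a handful of key/value pairs. A sketch with placeholder names (`./scripts/setup-remote-state.sh` writes the real values):

```shell
# Sketch of a backend config file (placeholder bucket/table names;
# the setup script generates the real ones).
mkdir -p backend
cat > backend/staging.hcl <<'EOF'
bucket         = "terraform-state-staging-example"
key            = "static-site-deploy/staging/terraform.tfstate"
region         = "us-east-1"
dynamodb_table = "terraform-state-lock-staging-example"
encrypt        = true
EOF
```

Everyone on the team then runs `terraform init -backend-config=backend/staging.hcl` and shares the same locked, encrypted state.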

Q: Can I use this for non-WebGL static sites?

A: Absolutely! This works for any static site:

  • React, Vue, Angular apps
  • Jekyll, Hugo, Gatsby sites
  • Plain HTML/CSS/JS
  • Documentation sites
  • Any static content

Q: How do I add more AWS services?

A: Add resources to your Terraform configuration:

  • Create new .tf files or add to existing ones
  • Use Terraform modules for reusable components
  • Update variables and outputs as needed
  • Test in dev environment first
💡 Have More Questions? Check the official Terraform and AWS documentation in the Additional Resources section below.

18. Command Reference

🔧 Terraform Commands

Basic Commands

  • terraform init – Initialize Terraform, download providers
  • terraform init -backend-config=backend/dev.hcl – Initialize with remote backend configuration
  • terraform fmt – Format all .tf files in current directory
  • terraform fmt -recursive – Format all .tf files recursively
  • terraform validate – Validate Terraform configuration syntax
  • terraform show – Show current state in human-readable format
  • terraform state list – List all resources in current state
  • terraform state show resource – Show details of specific resource

Workspace Commands

  • terraform workspace list – List all available workspaces
  • terraform workspace show – Display current workspace name
  • terraform workspace select dev – Switch to specified workspace
  • terraform workspace new staging – Create new workspace
  • terraform workspace delete staging – Delete workspace (must destroy resources first)

Plan & Apply Commands

  • terraform plan – Create execution plan (preview changes)
  • terraform plan -var-file=environments/dev/terraform.tfvars – Plan with environment-specific variables
  • terraform plan -out=tfplan.out – Save plan to file for later use
  • terraform apply – Apply changes (will prompt for confirmation)
  • terraform apply -var-file=environments/dev/terraform.tfvars – Apply with environment-specific variables
  • terraform apply tfplan.out – Apply previously saved plan
  • terraform apply -auto-approve – Apply without confirmation prompt
  • terraform destroy – Destroy all resources in state
  • terraform destroy -var-file=environments/dev/terraform.tfvars – Destroy with environment-specific variables
  • terraform destroy -target=resource – Destroy only specific resource

Output Commands

  • terraform output – Display all output values
  • terraform output deployment_url – Display specific output value
  • terraform output -raw s3_bucket_id – Display raw output value (no quotes)
  • terraform output -json – Display outputs in JSON format

State Management

  • terraform state list – List all resources in current state
  • terraform state show aws_s3_bucket.main – Show detailed information about specific resource
  • terraform state rm aws_s3_bucket.main – Remove resource from state (doesn’t destroy it in AWS)
  • terraform import aws_s3_bucket.main bucket-name – Import existing AWS resource into Terraform state
  • terraform state mv old_address new_address – Move resource to different address in state
  • terraform force-unlock LOCK_ID – Force unlock state (use with caution)

☁️ AWS CLI Commands

Configuration

  • aws configure – Configure AWS CLI with default profile
  • aws configure --profile deploy-config – Configure named AWS profile
  • aws configure list-profiles – List all configured AWS profiles
  • export AWS_PROFILE=deploy-config – Set default AWS profile for current session
  • aws sts get-caller-identity – Verify AWS credentials and get account info

S3 Commands

  • aws s3 ls – List all S3 buckets
  • aws s3 ls s3://bucket-name/ – List objects in bucket
  • aws s3 ls s3://bucket-name/ --recursive – List all objects recursively
  • aws s3 cp file.txt s3://bucket-name/ – Upload single file to S3
  • aws s3 sync ./build s3://bucket-name/ – Upload directory to S3 (sync)
  • aws s3 sync ./build s3://bucket-name/ --delete – Sync directory and delete removed files
  • aws s3 cp s3://bucket-name/file.txt ./ – Download file from S3
  • aws s3 sync s3://bucket-name/ ./download/ – Download directory from S3
  • aws s3 rm s3://bucket-name/file.txt – Delete file from S3
  • aws s3 rm s3://bucket-name/ --recursive – Delete all files in bucket
  • aws s3 mb s3://bucket-name – Create new S3 bucket
  • aws s3 rb s3://bucket-name – Delete empty S3 bucket
  • aws s3 rb s3://bucket-name --force – Delete bucket and all contents

CloudFront Commands

  • aws cloudfront list-distributions – List all CloudFront distributions
  • aws cloudfront get-distribution --id DISTRIBUTION_ID – Get detailed information about distribution
  • aws cloudfront create-invalidation --distribution-id DIST_ID --paths "/*" – Invalidate CloudFront cache for specified paths
  • aws cloudfront list-invalidations --distribution-id DIST_ID – List all invalidations for distribution
  • aws cloudfront get-invalidation --distribution-id DIST_ID --id INV_ID – Get status of specific invalidation

IAM Commands

  • aws sts get-caller-identity – Get current AWS user/role identity
  • aws iam list-users – List all IAM users
  • aws iam list-roles – List all IAM roles
  • aws iam get-role --role-name role-name – Get details of specific IAM role
  • aws iam list-policies – List all IAM policies
  • aws iam attach-role-policy --role-name role-name --policy-arn POLICY_ARN – Attach policy to IAM role

💻 Shell/Bash Commands

Environment Variables

  • export AWS_PROFILE=deploy-config – Set AWS profile for current session
  • export AWS_DEFAULT_REGION=us-east-1 – Set default AWS region
  • export AWS_ACCESS_KEY_ID=your-key – Set AWS access key (not recommended, use profiles)
  • export AWS_SECRET_ACCESS_KEY=your-secret – Set AWS secret key (not recommended, use profiles)
  • echo $AWS_PROFILE – Display value of environment variable
  • env | grep AWS – List all AWS-related environment variables

File Operations

  • cd /path/to/directory – Change to specified directory
  • cd .. – Go up one directory level
  • cd ~ – Go to home directory
  • ls – List files in current directory
  • ls -la – List files with details and hidden files
  • ls -lh – List files with human-readable sizes
  • mkdir directory-name – Create new directory
  • mkdir -p path/to/directory – Create directory and parent directories
  • rm file.txt – Remove file
  • rm -r directory/ – Remove directory recursively
  • rm -rf directory/ – Force remove directory
  • cp source.txt dest.txt – Copy file
  • cp -r source/ dest/ – Copy directory recursively
  • mv old.txt new.txt – Move or rename file
  • mv file.txt directory/ – Move file to directory

Command Chaining & Variables

  • command1 && command2 – Run command2 only if command1 succeeds
  • command1 || command2 – Run command2 only if command1 fails
  • command1 ; command2 – Run both commands regardless of result
  • BUCKET=$(terraform output -raw s3_bucket_id) – Store command output in variable
  • echo $BUCKET – Display variable value
  • aws s3 ls s3://$BUCKET/ – Use variable in command
  • if [ $? -eq 0 ]; then echo "Success"; fi – Check if previous command succeeded

Script Execution

  • chmod +x script.sh – Make script executable
  • ./script.sh – Run executable script
  • bash script.sh – Run script with bash interpreter
  • sh script.sh – Run script with sh interpreter
  • ./script.sh dev ./build – Run script with arguments
  • bash -n script.sh – Check script syntax without executing

Useful Utilities

  • cat file.txt – Display entire file contents
  • less file.txt – View file with scrollable interface
  • head -n 20 file.txt – Display first 20 lines of file
  • tail -n 20 file.txt – Display last 20 lines of file
  • grep "pattern" file.txt – Search for pattern in file
  • grep -r "pattern" directory/ – Search for pattern recursively in directory
  • find . -name "*.tf" – Find files matching pattern
  • find . -type f -name "*.sh" – Find files by type and name
  • df -h – Display disk space usage
  • ls -l file.txt – Display file permissions and details
  • history – Display command history
  • history | grep terraform – Search command history for pattern

📜 Helper Scripts (Optional Shortcuts)

  • ./scripts/init.sh – Initialize Terraform and setup workspace
  • ./scripts/plan.sh dev – Create execution plan for dev environment
  • ./scripts/plan.sh staging – Create execution plan for staging environment
  • ./scripts/plan.sh prod – Create execution plan for production environment
  • ./scripts/apply.sh dev – Apply infrastructure changes to dev
  • ./scripts/apply.sh staging – Apply infrastructure changes to staging
  • ./scripts/apply.sh prod – Apply infrastructure changes to production
  • ./scripts/upload.sh dev ./build – Upload files to dev S3 bucket and invalidate cache
  • ./scripts/upload.sh staging ./build – Upload files to staging S3 bucket and invalidate cache
  • ./scripts/upload.sh prod ./build – Upload files to production S3 bucket and invalidate cache
  • ./scripts/outputs.sh dev – Display Terraform outputs for dev environment
  • ./scripts/outputs.sh staging – Display Terraform outputs for staging environment
  • ./scripts/outputs.sh prod – Display Terraform outputs for production environment
  • ./scripts/destroy.sh dev – Destroy all resources in dev environment
  • ./scripts/destroy.sh staging – Destroy all resources in staging environment
  • ./scripts/destroy.sh prod – Destroy all resources in production environment
  • ./scripts/setup-remote-state.sh staging – Setup S3 and DynamoDB for remote state (staging)
  • ./scripts/setup-remote-state.sh prod – Setup S3 and DynamoDB for remote state (production)
  • ./scripts/setup-iam-oidc.sh staging $AWS_ACCOUNT_ID repo – Create IAM role for GitHub Actions OIDC (staging)
  • ./scripts/setup-iam-oidc.sh prod $AWS_ACCOUNT_ID repo – Create IAM role for GitHub Actions OIDC (production)

🔀 Git Commands

  • git clone https://github.com/m-saad-siddique/static-site-IAC-deploy.git – Clone repository from GitHub
  • cd static-site-IAC-deploy – Navigate to cloned repository
  • git status – Check status of working directory
  • git add file.txt – Stage specific file for commit
  • git add . – Stage all changes for commit
  • git commit -m "Message" – Commit staged changes with message
  • git push origin branch-name – Push commits to remote repository
  • git checkout -b feature-name – Create and switch to new branch
  • git switch -c feature-name – Create and switch to new branch (alternative)
  • git checkout branch-name – Switch to existing branch
  • git switch branch-name – Switch to existing branch (alternative)
  • git merge branch-name – Merge branch into current branch
  • git pull origin branch-name – Pull latest changes from remote
  • git log – View commit history
  • git log --oneline – View compact commit history

19. 📚 Additional Resources

Official Documentation

Learning Resources

Project Documentation

  • README.md – Main project documentation
  • WORKSPACES_GUIDE.md – Workspace management guide
  • GITHUB_ACTIONS_SETUP.md – CI/CD setup instructions
  • IAM_ROLE_GUIDE.md – IAM roles and policies explained
  • AWS_PROFILE_SETUP.md – Local AWS configuration
