🚀 Static Site Deployment Tutorial
Learn Infrastructure as Code with Terraform + AWS S3 + CloudFront
📑 Table of Contents
- 1. What is Terraform?
- 2. About This Project
- 3. Best Practices: Project Structure
- 4. How to Handle Environments & Workspaces
- 5. Understanding AWS Services
- 6. Code Snippets: Configuring Different Services
- 7. Configure AWS on Local Machine
- 8. Prerequisites & Setup
- 9. Step 1: Initialize Terraform
- 10. Step 2: Plan Your Deployment
- 11. Step 3: Apply Infrastructure
- 12. Step 4: Upload Your Static Site
- 13. Step 5: Verify Deployment
- 14. CI/CD Setup (Advanced)
- 15. Best Practices & Recommendations
- 16. Troubleshooting
- 17. Frequently Asked Questions (FAQ)
- 18. Command Reference
- 19. Additional Resources
Clone the project
git clone https://github.com/m-saad-siddique/static-site-IAC-deploy.git
Get started in 5 steps
Common issues & solutions
Quick command reference
Frequently asked questions
1. Quick Start: What is Terraform?
Terraform lets you create AWS infrastructure (S3, CloudFront, etc.) by writing code instead of clicking in the AWS Console.
Basic Workflow
# 1. Write infrastructure code (.tf files)
# 2. Run: terraform init # Downloads AWS provider
# 3. Run: terraform plan # Shows what will be created
# 4. Run: terraform apply   # Actually creates resources
Quick Example
Create an S3 bucket with code:
# main.tf
provider "aws" {
region = "us-east-1"
}

resource "aws_s3_bucket" "my_bucket" {
bucket = "my-static-site-123"
}

# Run: terraform init && terraform apply
# Result: S3 bucket created in AWS!
Key Concepts (Quick Reference)
| Concept | Example | Purpose |
|---|---|---|
| Provider | provider "aws" | Connects to AWS |
| Resource | resource "aws_s3_bucket" | Creates AWS resource |
| Variable | variable "bucket_name" | Makes code reusable |
| Module | module "s3" { source = "./modules/s3" } | Reusable code blocks |
| Output | output "url" { value = ... } | Shows results after deploy |
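One detail worth knowing alongside this table: Terraform refers to every resource by an address of the form `TYPE.NAME`, which is how resources appear in `terraform plan` output and in commands like `terraform state show`. A trivial shell helper (purely illustrative) shows the shape:

```shell
#!/bin/sh
# Build a Terraform resource address from its type and name.
# Addresses like this appear in plan output and `terraform state` commands.
tf_address() {
  echo "$1.$2"
}

tf_address aws_s3_bucket my_bucket
```

For example, `terraform state show "$(tf_address aws_s3_bucket my_bucket)"` inspects the bucket created above.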
2. What This Project Does
Clone the repository to get started:
git clone https://github.com/m-saad-siddique/static-site-IAC-deploy.git
This project deploys static websites to AWS using Terraform. What you’ll actually do:
What Gets Created
Your files (HTML/CSS/JS) → S3 bucket (Private Storage) → CloudFront (Global Distribution) → HTTPS URL (Your Live Site)
Project Files You’ll Use
webgl-deploy/
├── main.tf # Main config (uses modules)
├── variables.tf # Configurable values
├── environments/dev/ # Dev settings
│ └── terraform.tfvars # Edit this for your config
└── modules/ # Reusable code
├── s3/ # S3 bucket code
    └── cloudfront/       # CloudFront code
Quick Workflow
# 1. Initialize Terraform
terraform init

# 2. Plan deployment
terraform plan -var-file=environments/dev/terraform.tfvars

# 3. Apply infrastructure
terraform apply -var-file=environments/dev/terraform.tfvars

# 4. Upload files
BUCKET=$(terraform output -raw s3_bucket_id)
aws s3 sync ./build s3://$BUCKET/ --delete

# 5. Get URL
terraform output deployment_url
3. Project Structure (Quick Guide)
📁 File Organization
webgl-deploy/
├── main.tf # Uses modules to create resources
├── variables.tf # Configurable values
├── outputs.tf # Shows URLs after deploy
├── environments/
│ └── dev/terraform.tfvars # Your settings
└── modules/ # Reusable code
├── s3/ # Creates S3 bucket
    └── cloudfront/              # Creates CloudFront
How to Use Files
| File | What You Do |
|---|---|
| environments/dev/terraform.tfvars | Edit this to change settings (bucket name, region, etc.) |
| main.tf | Uses modules – usually doesn’t need editing |
| modules/s3/main.tf | Edit if you need to customize the S3 bucket |
| scripts/*.sh | Optional helper scripts (see the manual steps in the tutorial) |
Quick Example: Change Bucket Name
# Edit: environments/dev/terraform.tfvars
bucket_name = "my-custom-bucket-name"

# Then deploy
./scripts/apply.sh dev
Essential Rules
- ✅ Never commit terraform.tfvars with real secrets
- ✅ Always run terraform plan before apply
- ✅ Use modules – don’t put everything in one file
- ✅ Format code – run terraform fmt before committing
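The "never commit tfvars" rule can be enforced mechanically. Here is a minimal pre-commit-style sketch (the Git hook wiring is left out, and the file list below is hard-coded for illustration — a real hook would feed in `git diff --cached --name-only`):

```shell
#!/bin/sh
# Refuse to proceed when any staged file looks like a tfvars file.
check_staged() {
  # $1 = newline-separated list of staged file paths
  if printf '%s\n' "$1" | grep -q '\.tfvars$'; then
    echo "blocked: tfvars file staged"
  else
    echo "ok"
  fi
}

# Illustrative input; in a hook, use: check_staged "$(git diff --cached --name-only)"
check_staged "main.tf
environments/dev/terraform.tfvars"
```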
4. Working with Environments
Use different environments (dev, staging, prod) with separate infrastructure. Workspaces keep them isolated.
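Each environment pairs a workspace with a matching var-file under environments/. That convention can be sketched as a tiny helper (illustrative; it mirrors the dev/staging/prod validation rule used in variables.tf later in this tutorial):

```shell
#!/bin/sh
# Map an environment name to its var-file, rejecting unknown names.
tfvars_for() {
  case "$1" in
    dev|staging|prod) echo "environments/$1/terraform.tfvars" ;;
    *) echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

tfvars_for dev
```

Used as, for example, `terraform apply -var-file="$(tfvars_for dev)"`.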
Quick Example
# Deploy to dev
terraform workspace select dev
terraform apply -var-file=environments/dev/terraform.tfvars

# Deploy to staging
terraform workspace select staging
terraform apply -var-file=environments/staging/terraform.tfvars

# Deploy to prod
terraform workspace select prod
terraform apply -var-file=environments/prod/terraform.tfvars

# Each environment is separate - safe to destroy one!
How Workspaces Work
# Workspace = Separate state file per environment
dev workspace     → local: terraform.tfstate.d/dev/terraform.tfstate
staging workspace → S3: terraform-state-staging/...
prod workspace    → S3: terraform-state-prod/...

# Scripts handle this automatically - you just specify the environment!
Environment Configuration Files
# Edit: environments/dev/terraform.tfvars
environment = "dev"
aws_region = "us-east-1"
bucket_name = "my-site-dev"

# Edit: environments/prod/terraform.tfvars
environment = "prod"
aws_region = "us-east-1"
bucket_name = "my-site-prod"
Practical Commands
# Deploy to different environments
terraform workspace select dev
terraform apply -var-file=environments/dev/terraform.tfvars

terraform workspace select staging
terraform apply -var-file=environments/staging/terraform.tfvars

terraform workspace select prod
terraform apply -var-file=environments/prod/terraform.tfvars

# View what's deployed
terraform workspace select dev
terraform output

# Destroy a specific environment
terraform workspace select dev
terraform destroy -var-file=environments/dev/terraform.tfvars
Manual Workspace Commands (If Needed)
terraform workspace list # List all
terraform workspace show # Current workspace
terraform workspace select dev   # Switch workspace
5. AWS Services (What They Do)
Quick Overview
| Service | What It Does |
|---|---|
| S3 | Stores your files (HTML, CSS, JS) |
| CloudFront | Serves files globally (CDN) |
| OAC | Security – only CloudFront can access S3 |
How It Works (Simple)
Browser → CloudFront (CDN) → S3 bucket (private origin)
Why Private S3 + CloudFront?
- ✅ Security – S3 bucket is private (can’t be accessed directly)
- ✅ Speed – CloudFront CDN is faster than direct S3
- ✅ HTTPS – Free SSL certificate from CloudFront
- ✅ Global – Files cached worldwide for fast access
What You Need to Know
# S3 Bucket
- Stores your static files
- Private (not publicly accessible)
- Encrypted by default

# CloudFront
- Global CDN (Content Delivery Network)
- Serves files from S3
- Provides HTTPS URL
- Takes 15-20 minutes to deploy

# OAC (Origin Access Control)
- Security feature
- Only CloudFront can access S3
- Configured automatically
6. Code Snippets: Configuring Different Services
Now let’s look at actual Terraform code for configuring each service. These examples show you how to write infrastructure as code.
📦 1. S3 Bucket Configuration
Here’s how to create a private S3 bucket with security best practices. Note: these are standalone examples showing Terraform syntax; this project organizes the same resources into modules (see the Using Modules subsection below).
Basic S3 Bucket
# Create S3 bucket
resource "aws_s3_bucket" "webgl_bucket" {
bucket = "${var.project_name}-${var.environment}-${random_id.bucket_suffix.hex}"
# Allow bucket deletion even with objects (useful for dev/staging)
force_destroy = true
tags = {
Environment = var.environment
ManagedBy = "Terraform"
}
}

# Block all public access
resource "aws_s3_bucket_public_access_block" "webgl_bucket_pab" {
  bucket = aws_s3_bucket.webgl_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
Enable Encryption
# Server-side encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "webgl_bucket_encryption" {
  bucket = aws_s3_bucket.webgl_bucket.id

  rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256" # Free, managed by AWS
}
}
}
Enable Versioning
# Enable versioning for rollback capability
resource "aws_s3_bucket_versioning" "webgl_bucket_versioning" {
  bucket = aws_s3_bucket.webgl_bucket.id

  versioning_configuration {
status = var.enable_versioning ? "Enabled" : "Disabled"
}
}
Configure CORS (for WebGL/Web Apps)
# CORS configuration for cross-origin requests
resource "aws_s3_bucket_cors_configuration" "webgl_bucket_cors" {
  bucket = aws_s3_bucket.webgl_bucket.id

  cors_rule {
allowed_origins = ["*"] # Or specify your domain
allowed_methods = ["GET", "HEAD"]
allowed_headers = ["*"]
max_age_seconds = 3600
}
}
S3 Bucket Policy (Allow CloudFront Access)
# Bucket policy that allows only CloudFront to access S3
resource "aws_s3_bucket_policy" "webgl_bucket_policy" {
  bucket = aws_s3_bucket.webgl_bucket.id

  policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "AllowCloudFrontServicePrincipal"
Effect = "Allow"
Principal = {
Service = "cloudfront.amazonaws.com"
}
Action = "s3:GetObject"
Resource = "${aws_s3_bucket.webgl_bucket.arn}/*"
Condition = {
StringEquals = {
"AWS:SourceArn" = var.cloudfront_distribution_arn
}
}
}
]
  })

  depends_on = [aws_s3_bucket_public_access_block.webgl_bucket_pab]
}
🌐 2. CloudFront Distribution Configuration
Here’s how to create a CloudFront distribution with Origin Access Control. Note: these are standalone code examples; this project uses modules (see the Using Modules subsection below).
Create Origin Access Control (OAC)
# Origin Access Control - modern replacement for OAI
resource "aws_cloudfront_origin_access_control" "webgl_oac" {
name = "${var.distribution_name}-oac"
description = "OAC for secure S3 access"
origin_access_control_origin_type = "s3"
signing_behavior = "always"
signing_protocol = "sigv4"
}
CloudFront Distribution
# CloudFront distribution
resource "aws_cloudfront_distribution" "webgl_distribution" {
enabled = true
is_ipv6_enabled = true
default_root_object = "index.html"
  comment             = "WebGL Distribution for ${var.environment}"

  # Origin - S3 bucket with OAC
origin {
domain_name = aws_s3_bucket.webgl_bucket.bucket_regional_domain_name
origin_id = "S3-${aws_s3_bucket.webgl_bucket.id}"
origin_access_control_id = aws_cloudfront_origin_access_control.webgl_oac.id
  }

  # Default cache behavior
default_cache_behavior {
target_origin_id = "S3-${aws_s3_bucket.webgl_bucket.id}"
# Redirect HTTP to HTTPS
viewer_protocol_policy = "redirect-to-https"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD"]
# Use managed cache policy
cache_policy_id = "658327ea-f89d-4fab-a63d-7e88639e58f6" # CachingOptimized
# Compress objects automatically
compress = true
    # Note: TTLs are governed by the managed cache policy attached above;
    # CloudFront rejects per-behavior min_ttl/default_ttl/max_ttl when a
    # cache policy is associated, so they are omitted here.
  }

  # Custom error responses for SPA routing
custom_error_response {
error_code = 403
response_code = 200
response_page_path = "/index.html"
error_caching_min_ttl = 300
  }

  custom_error_response {
error_code = 404
response_code = 200
response_page_path = "/index.html"
error_caching_min_ttl = 300
  }

  # Viewer certificate (use CloudFront default)
viewer_certificate {
cloudfront_default_certificate = true
  }

  # Restrictions
restrictions {
geo_restriction {
restriction_type = "none"
}
  }

  tags = {
Environment = var.environment
ManagedBy = "Terraform"
}
}
🔐 3. IAM Roles and Policies
Here’s how to create IAM roles for CI/CD deployments:
IAM Role for GitHub Actions (OIDC)
# IAM role that can be assumed by GitHub Actions
resource "aws_iam_role" "deployment_role" {
  name = "${var.project_name}-${var.environment}-deployment-role"

  assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Principal = {
Federated = "arn:aws:iam::${var.aws_account_id}:oidc-provider/token.actions.githubusercontent.com"
}
Action = "sts:AssumeRoleWithWebIdentity"
Condition = {
StringEquals = {
"token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
}
StringLike = {
"token.actions.githubusercontent.com:sub" = "repo:${var.github_repo}:*"
}
}
}
]
  })

  tags = {
Environment = var.environment
ManagedBy = "Terraform"
}
}
IAM Policy for Deployment
# IAM policy with permissions for S3 and CloudFront
resource "aws_iam_role_policy" "deployment_policy" {
name = "${var.project_name}-${var.environment}-deployment-policy"
  role = aws_iam_role.deployment_role.id

  policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListBucket"
]
Resource = [
"${aws_s3_bucket.webgl_bucket.arn}",
"${aws_s3_bucket.webgl_bucket.arn}/*"
]
},
{
Effect = "Allow"
Action = [
"cloudfront:CreateInvalidation",
"cloudfront:GetInvalidation",
"cloudfront:ListInvalidations"
]
Resource = "${aws_cloudfront_distribution.webgl_distribution.arn}"
}
]
})
}
📝 4. Variables Configuration
Define variables to make your code reusable:
Variable Definitions
# variables.tf

# Project name
variable "project_name" {
description = "Name of the project"
type = string
default = "webgl"
}

# Environment
variable "environment" {
description = "Environment name (dev, staging, prod)"
type = string
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "Environment must be one of: dev, staging, prod"
}
}

# AWS Region
variable "aws_region" {
description = "AWS region for resources"
type = string
default = "us-east-1"
}

# S3 Versioning
variable "enable_s3_versioning" {
description = "Enable S3 bucket versioning"
type = bool
default = false
}

# CloudFront Cache TTL
variable "default_ttl" {
description = "Default TTL for CloudFront cache (in seconds)"
type = number
default = 3600 # 1 hour
}

# Common Tags
variable "common_tags" {
description = "Common tags for all resources"
type = map(string)
default = {
Project = "WebGL Deployment"
ManagedBy = "Terraform"
}
}
Environment-Specific Values
# environments/dev/terraform.tfvars
environment = "dev"
aws_region = "us-east-1"
enable_s3_versioning = false
default_ttl          = 3600   # 1 hour (faster iteration)

# environments/prod/terraform.tfvars
environment = "prod"
aws_region = "us-east-1"
enable_s3_versioning = true # Enable for rollback
default_ttl          = 31536000   # 1 year (longer cache)
📤 5. Outputs Configuration
Define outputs to get important values after deployment. Note: This project uses modules, so outputs reference module outputs:
# outputs.tf

# S3 Bucket ID (from module)
output "s3_bucket_id" {
description = "Name of the S3 bucket"
value = module.s3.bucket_id
}

# CloudFront Distribution Domain (from module)
output "cloudfront_distribution_domain_name" {
description = "CloudFront distribution domain name"
value = module.cloudfront.distribution_domain_name
}

# CloudFront Distribution ID (from module)
output "cloudfront_distribution_id" {
description = "CloudFront distribution ID"
value = module.cloudfront.distribution_id
}

# Full Deployment URL
output "deployment_url" {
description = "Full HTTPS URL to access the site"
value = "https://${module.cloudfront.distribution_domain_name}"
}

# Example: Direct resource output (if not using modules)
# output "s3_bucket_id" {
# value = aws_s3_bucket.webgl_bucket.id
# }
🧩 6. Using Modules
Organize code into reusable modules:
Module Structure
modules/
├── s3/
│ ├── main.tf # S3 resources
│ ├── variables.tf # Module inputs
│ └── outputs.tf # Module outputs
└── cloudfront/
├── main.tf
├── variables.tf
    └── outputs.tf
Using a Module
# main.tf - Root module

# Use S3 module
module "s3" {
  source = "./modules/s3"

  bucket_name                 = "${var.project_name}-${var.environment}"
environment = var.environment
cloudfront_distribution_arn = module.cloudfront.distribution_arn
enable_versioning = var.enable_s3_versioning
enable_cors = true
common_tags = var.common_tags
}

# Use CloudFront module
module "cloudfront" {
  source = "./modules/cloudfront"

  distribution_name              = "${var.project_name}-${var.environment}"
s3_bucket_regional_domain_name = module.s3.bucket_regional_domain_name
default_root_object = var.default_root_object
default_ttl = var.default_ttl
common_tags = var.common_tags
}
🔧 7. Provider Configuration
Configure the AWS provider:
# versions.tf

terraform {
  required_version = ">= 1.0"

  required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.0"
}
}
}

# Provider configuration
provider "aws" {
region = var.aws_region
  profile = var.aws_profile != "" ? var.aws_profile : null

  default_tags {
tags = var.common_tags
}
}
🎯 8. Complete Example: Main Configuration
Here’s a complete example combining everything:
# main.tf - Complete example

# Random ID for unique bucket names
resource "random_id" "bucket_suffix" {
byte_length = 4
}

# S3 Bucket Module
module "s3" {
  source = "./modules/s3"

  bucket_name                 = "${var.project_name}-${var.environment}-${random_id.bucket_suffix.hex}"
environment = var.environment
cloudfront_distribution_arn = module.cloudfront.distribution_arn
enable_versioning = var.enable_s3_versioning
enable_cors = true
common_tags = var.common_tags
}

# CloudFront Module
module "cloudfront" {
  source = "./modules/cloudfront"

  distribution_name              = "${var.project_name}-${var.environment}"
environment = var.environment
unique_suffix = random_id.bucket_suffix.hex
s3_bucket_regional_domain_name = module.s3.bucket_regional_domain_name
default_root_object = var.default_root_object
default_ttl = var.default_ttl
common_tags = var.common_tags
}

# Outputs
output "cloudfront_url" {
value = "https://${module.cloudfront.distribution_domain_name}"
}

output "s3_bucket_name" {
value = module.s3.bucket_id
}
💡 Best Practices in Code
- ✅ Use Variables – Don’t hardcode values
- ✅ Add Descriptions – Document all variables
- ✅ Use Modules – Organize code into reusable components
- ✅ Tag Resources – Add tags for organization
- ✅ Validate Inputs – Use validation blocks
- ✅ Use Locals – Calculate values once, reuse multiple times
- ✅ Add Dependencies – Use depends_on when needed
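For intuition about the interpolated name used throughout these examples (`"${var.project_name}-${var.environment}-${random_id.bucket_suffix.hex}"`), here is the same assembly mimicked in shell; `random_id` with `byte_length = 4` yields 8 hex characters:

```shell
#!/bin/sh
# Assemble a bucket name the way the Terraform interpolation does.
bucket_name() {
  echo "$1-$2-$3"   # project-environment-suffix
}

# Emulate random_id (byte_length = 4 -> 8 hex characters):
suffix=$(od -An -N4 -tx1 /dev/urandom | tr -d ' \n')
bucket_name webgl dev "$suffix"
```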
7. Configure AWS on Local Machine
Before you can deploy infrastructure with Terraform, you need to configure AWS authentication on your local machine. This section covers everything you need to set up AWS CLI and authenticate with your AWS account.
📋 Prerequisites
- AWS Account – You need an active AWS account
- AWS CLI – Command-line tool for interacting with AWS
- IAM User with Access Keys – For programmatic access
🔧 Step 1: Install AWS CLI
First, install the AWS Command Line Interface (CLI) on your machine.
macOS Installation
# Using Homebrew (recommended)
brew install awscli

# Or using pip
pip3 install awscli
Linux Installation
# Download and install
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Windows Installation
Download the AWS CLI MSI installer from aws.amazon.com/cli and run it.
Verify Installation
# Check AWS CLI version
aws --version

# Should output something like: aws-cli/2.x.x
👤 Step 2: Create IAM User and Access Keys
You need an IAM user with programmatic access (access keys) to use with Terraform.
Create IAM User in AWS Console
- Log in to AWS Console
- Navigate to IAM → Users
- Click Create user
- Enter a username (e.g., terraform-deploy)
- Select Provide user access to the AWS Management Console (optional) or Access key – Programmatic access
- Click Next
Attach Permissions
Attach policies that grant necessary permissions. For this project, you need:
- S3 – Full access (for bucket creation and file uploads)
- CloudFront – Full access (for distribution creation and invalidation)
- IAM – Limited access (for role creation, if using CI/CD)
# Recommended: Attach these managed policies
- AmazonS3FullAccess
- CloudFrontFullAccess
- IAMFullAccess (or create a custom policy with least privilege)
Create Access Keys
- After creating the user, go to the Security credentials tab
- Scroll to Access keys section
- Click Create access key
- Select Command Line Interface (CLI) as the use case
- Click Next and then Create access key
- IMPORTANT: Download or copy both:
- Access Key ID
- Secret Access Key
⚙️ Step 3: Configure AWS CLI Profile
This project uses an AWS profile named deploy-config for authentication. You can configure it interactively or non-interactively.
Method 1: Interactive Setup (Recommended)
This is the easiest method – AWS CLI will prompt you for each value:
aws configure --profile deploy-config
You will be prompted to enter:
- AWS Access Key ID: your access key ID (e.g., AKIAIOSFODNN7EXAMPLE)
- AWS Secret Access Key: your secret access key
- Default region name: e.g., us-east-1 (recommended for CloudFront)
- Default output format: json (recommended)
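As an aside, the AWS CLI and SDKs also read credentials from standard environment variables, which can be handy in throwaway shells or CI jobs that don't use profiles. The values below are placeholders:

```shell
#!/bin/sh
# Standard AWS credential environment variables (placeholder values -
# never put real keys in a script or commit them anywhere).
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=EXAMPLE_SECRET_KEY
export AWS_DEFAULT_REGION=us-east-1

echo "region set to $AWS_DEFAULT_REGION"
```

These variables are used when no profile is specified; passing `--profile` still selects that profile's credentials.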
Method 2: Non-Interactive Setup
Use this method if you want to configure via command line or scripts:
# Set access key ID
aws configure set aws_access_key_id YOUR_ACCESS_KEY_ID --profile deploy-config

# Set secret access key
aws configure set aws_secret_access_key YOUR_SECRET_ACCESS_KEY --profile deploy-config

# Set default region
aws configure set region us-east-1 --profile deploy-config

# Set output format
aws configure set output json --profile deploy-config
Replace YOUR_ACCESS_KEY_ID and YOUR_SECRET_ACCESS_KEY with your actual keys from Step 2.
Method 3: Manual Configuration Files
You can also edit the configuration files directly:
Edit Credentials File
# Open credentials file
nano ~/.aws/credentials   # or use your preferred editor

# Add the following section:
[deploy-config]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
Edit Config File
# Open config file
nano ~/.aws/config   # or use your preferred editor

# Add the following section:
[profile deploy-config]
region = us-east-1
output = json
Never commit these files – keep ~/.aws/ out of version control.
✅ Step 4: Verify AWS Configuration
Verify that your AWS profile is configured correctly:
List All Profiles
# List all configured profiles
aws configure list-profiles
Output should include deploy-config:
default
deploy-config
Check Profile Configuration
# View profile settings
aws configure list --profile deploy-config
Test Authentication
# Test the profile - this should return your AWS account info
aws sts get-caller-identity --profile deploy-config
Expected output:
{
"UserId": "AIDAXXXXXXXXXXXXXXXXX",
"Account": "123456789012",
"Arn": "arn:aws:iam::123456789012:user/terraform-deploy"
}
🔍 Step 5: Test AWS Access
Test that you have the necessary permissions:
Test S3 Access
# List S3 buckets (should work if you have S3 permissions)
aws s3 ls --profile deploy-config
Test IAM Access
# Get your IAM user info
aws iam get-user --profile deploy-config
🌍 Using Different AWS Regions
You can configure different regions for different profiles or override the region per command:
# Use a different region for a specific command
aws s3 ls --profile deploy-config --region eu-west-1

# Or set the region via an environment variable
export AWS_DEFAULT_REGION=eu-west-1
aws s3 ls --profile deploy-config
🔄 Using a Different Profile Name
If you want to use a different profile name instead of deploy-config:
# Set environment variable before running scripts
export AWS_PROFILE_NAME=my-custom-profile

# Then run your scripts normally
./scripts/plan.sh dev
📁 Configuration File Locations
AWS CLI stores configuration in these files:
| File | Location | Purpose |
|---|---|---|
| credentials | ~/.aws/credentials | Stores access keys (sensitive!) |
| config | ~/.aws/config | Stores region and output format |
🔒 Security Best Practices
- ✅ Never Commit Credentials – Add ~/.aws/ to .gitignore
- ✅ Rotate Access Keys – Change keys regularly (every 90 days recommended)
- ✅ Use IAM Roles – For production, prefer IAM roles over access keys
- ✅ Enable MFA – Use Multi-Factor Authentication for your AWS account
- ✅ Monitor Usage – Check CloudTrail logs for unusual activity
- ✅ Separate Accounts – Use different AWS accounts for dev/staging/prod
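On the Terraform side of the same hygiene, these ignore patterns are commonly added to a project's .gitignore (a suggestion, not something this repo's documented setup does for you):

```shell
#!/bin/sh
# Print commonly recommended ignore rules for a Terraform project.
# Review them, then append to your project's .gitignore (>> .gitignore).
ignore_rules() {
  cat <<'EOF'
*.tfvars
*.tfstate
*.tfstate.backup
.terraform/
EOF
}

ignore_rules
```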
🔧 Troubleshooting
Issue: “Unable to locate credentials”
Solution: Make sure you’ve configured the profile:
aws configure --profile deploy-config
Issue: “Access Denied”
Solution: Check that your IAM user has the necessary permissions. Verify with:
aws iam list-attached-user-policies --user-name terraform-deploy --profile deploy-config
Issue: “Invalid credentials”
Solution: Your access keys may have been rotated or deleted. Create new access keys in the IAM console and reconfigure:
aws configure --profile deploy-config
Issue: “Profile not found”
Solution: Verify the profile exists:
# List profiles
aws configure list-profiles

# If deploy-config is missing, configure it
aws configure --profile deploy-config
📋 Quick Reference
# Configure AWS profile
aws configure --profile deploy-config

# Verify profile
aws sts get-caller-identity --profile deploy-config

# List all profiles
aws configure list-profiles

# View profile configuration
aws configure list --profile deploy-config

# Test S3 access
aws s3 ls --profile deploy-config

# Use a different region
aws s3 ls --profile deploy-config --region eu-west-1
8. Prerequisites & Setup
Now that AWS is configured, let’s set up the rest of your local environment:
Prerequisites
- AWS CLI – ✅ Should be installed and configured (see previous section)
- Terraform – Install from terraform.io/downloads
- Git – For cloning the repository
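A quick way to see which of these tools are already installed (a small sketch; it only reports, it installs nothing):

```shell
#!/bin/sh
# Report whether each required tool is on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: ok"
  else
    echo "$1: MISSING"
  fi
}

for tool in aws terraform git; do
  check_tool "$tool"
done
```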
Install Terraform
macOS Installation
# Using Homebrew (recommended)
brew install terraform

# Verify installation
terraform version
Linux Installation
# Download Terraform
wget https://releases.hashicorp.com/terraform/1.6.0/terraform_1.6.0_linux_amd64.zip
unzip terraform_1.6.0_linux_amd64.zip
sudo mv terraform /usr/local/bin/

# Verify installation
terraform version
Windows Installation
Download Terraform from terraform.io/downloads and add it to your PATH.
Clone Repository
Clone the project repository from GitHub:
git clone https://github.com/m-saad-siddique/static-site-IAC-deploy.git
cd static-site-IAC-deploy
Verify Setup
# Check AWS CLI
aws --version

# Check Terraform
terraform version

# Verify AWS profile
aws sts get-caller-identity --profile deploy-config
📋 Deployment Workflow
💡 Tip: Follow these steps in order for a successful deployment
9. Step 1: Initialize Terraform
Before you can deploy anything, you need to initialize Terraform. This downloads the AWS provider plugin and sets up your workspace.
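After a successful init, Terraform leaves working files in the project directory: a .terraform/ directory (providers, modules) and a .terraform.lock.hcl file (provider checksums). A tiny illustrative check:

```shell
#!/bin/sh
# Check whether a directory looks initialized (has Terraform's working dir).
initialized() {
  if [ -d "$1/.terraform" ]; then
    echo "yes"
  else
    echo "no"
  fi
}

initialized .
```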
Manual Steps
Run these commands to initialize Terraform:
# Set AWS profile
export AWS_PROFILE=deploy-config

# Initialize Terraform (downloads the AWS provider)
terraform init

# Check current workspace
terraform workspace show
Expected Output
Initializing Terraform...
Downloading AWS provider...
Terraform has been successfully initialized!

Current workspace: default
Quick Script (Optional)
Or use the helper script that does the same:
./scripts/init.sh
10. Step 2: Plan Your Deployment
Before creating resources, always run a plan. This shows you exactly what Terraform will create, modify, or destroy in AWS.
Manual Steps
Create a plan to preview what Terraform will create:
# Set AWS profile
export AWS_PROFILE=deploy-config

# Initialize (if using a remote backend)
terraform init -backend-config=backend/dev.hcl

# Select or create the workspace
terraform workspace select dev || terraform workspace new dev

# Create a plan with the environment's variables
terraform plan -var-file=environments/dev/terraform.tfvars -out=tfplan-dev.out
What You’ll See
Terraform will perform the following actions:

# module.s3.aws_s3_bucket.main will be created
+ resource "aws_s3_bucket" "main" {
    + bucket = "webgl-deploy-dev-abc123"
  }

# module.cloudfront.aws_cloudfront_distribution.main will be created
+ resource "aws_cloudfront_distribution" "main" {
    + domain_name = "..."
  }

Plan: 5 to add, 0 to change, 0 to destroy.
Understanding the Symbols
+ = Will be created
- = Will be destroyed
~ = Will be modified
-/+ = Will be destroyed and recreated
Quick Script (Optional)
Or use the helper script:
./scripts/plan.sh dev
11. Step 3: Apply Infrastructure
After reviewing the plan, apply it to actually create the resources in AWS. This will create your S3 bucket and CloudFront distribution.
Manual Steps
Apply the plan to create resources in AWS:
# Set AWS profile
export AWS_PROFILE=deploy-config

# Select workspace
terraform workspace select dev

# Apply using the saved plan (recommended)
terraform apply tfplan-dev.out

# OR apply directly (will prompt for confirmation)
terraform apply -var-file=environments/dev/terraform.tfvars

# View outputs
terraform output
What Happens
# Terraform will:
1. Create S3 bucket (takes ~10 seconds)
2. Create CloudFront distribution (takes ~2 minutes)
3. Configure security (OAC, policies)
4. Show you the URLs
Expected Output
aws_s3_bucket.main: Creating...
aws_s3_bucket.main: Creation complete
aws_cloudfront_distribution.main: Creating...
aws_cloudfront_distribution.main: Creation complete

Apply complete! Resources: 5 added.

Outputs:
s3_bucket_id = "webgl-deploy-dev-abc123"
deployment_url = "https://d1234567890.cloudfront.net"
Quick Script (Optional)
Or use the helper script:
./scripts/apply.sh dev
12. Step 4: Upload Your Static Site
Now that your infrastructure is ready, upload your static website files (HTML, CSS, JS) to the S3 bucket. Then invalidate the CloudFront cache so users see the latest version.
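One refinement worth considering before uploading: HTML entry points should stay revalidated on every load, while fingerprinted assets can be cached aggressively. A sketch of per-file-type Cache-Control selection (the values are common suggestions, not something this project's scripts do for you):

```shell
#!/bin/sh
# Pick a Cache-Control header value based on the file name.
cache_control_for() {
  case "$1" in
    *.html)      echo "no-cache" ;;
    *.css|*.js)  echo "public, max-age=31536000, immutable" ;;
    *)           echo "public, max-age=86400" ;;
  esac
}

cache_control_for index.html
```

Applied with separate syncs, e.g. `aws s3 sync ./build s3://$BUCKET/ --exclude '*' --include '*.html' --cache-control "$(cache_control_for index.html)"`.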
Manual Steps
Upload your static site files to S3 and invalidate CloudFront cache:
# Set AWS profile
export AWS_PROFILE=deploy-config

# Select workspace
terraform workspace select dev

# Get the bucket name from Terraform output
BUCKET=$(terraform output -raw s3_bucket_id)

# Upload files to S3 (replace ./build with your build directory)
aws s3 sync ./build s3://$BUCKET/ --delete

# Get the CloudFront distribution ID
DIST_ID=$(terraform output -raw cloudfront_distribution_id)

# Invalidate the CloudFront cache
aws cloudfront create-invalidation \
  --distribution-id $DIST_ID \
  --paths "/*"
Common Build Directories
# React/Vue/Angular
aws s3 sync ./build s3://$BUCKET/ --delete
aws s3 sync ./dist s3://$BUCKET/ --delete

# WebGL (Unity/Unreal)
aws s3 sync ./WebGLBuild s3://$BUCKET/ --delete

# Static HTML
aws s3 sync ./public s3://$BUCKET/ --delete
Expected Output
upload: build/index.html to s3://webgl-deploy-dev-abc123/index.html
upload: build/style.css to s3://webgl-deploy-dev-abc123/style.css
...
CloudFront invalidation created: I1234567890
Quick Script (Optional)
Or use the helper script:
./scripts/upload.sh dev ./build
13. Step 5: Verify Deployment
Finally, get your CloudFront URL and verify that your site is live and working correctly.
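When smoke-testing the URL, the HTTP status code usually tells you what went wrong. A small helper to interpret the common cases (the explanations reflect how CloudFront with a private S3 origin typically behaves):

```shell
#!/bin/sh
# Explain common HTTP status codes seen when testing a fresh CloudFront site.
explain_status() {
  case "$1" in
    200) echo "OK - site is live" ;;
    403) echo "Forbidden - check the OAC/bucket policy, or wait for the distribution to finish deploying" ;;
    404) echo "Not found - check that index.html was uploaded to the bucket root" ;;
    *)   echo "Unexpected status: $1" ;;
  esac
}

explain_status 200
```

Feed it a real code with curl, e.g. `explain_status "$(curl -s -o /dev/null -w '%{http_code}' "$(terraform output -raw deployment_url)")"`.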
Manual Steps
Get your deployment URL and verify everything works:
# Set AWS profile
export AWS_PROFILE=deploy-config

# Select workspace
terraform workspace select dev

# View all outputs (includes your CloudFront URL)
terraform output

# Get just the deployment URL
terraform output -raw deployment_url

# List files in the S3 bucket
BUCKET=$(terraform output -raw s3_bucket_id)
aws s3 ls s3://$BUCKET/ --recursive
Expected Output
s3_bucket_id = "webgl-deploy-dev-abc123"
deployment_url = "https://d1234567890.cloudfront.net"

# Copy the deployment_url and open it in a browser!
Quick Tests
# 1. Open URL in browser
# 2. Check it loads (may take 15-20 min for CloudFront)
# 3. Check HTTPS (should be secure)
# 4. Test that all pages work
Quick Script (Optional)
Or use the helper script:
./scripts/outputs.sh dev
14. Configure CI/CD with GitHub Actions
Set up automated deployments using GitHub Actions. This allows your infrastructure and site to deploy automatically when you push code to GitHub, without storing AWS credentials in GitHub.
🎯 What is CI/CD?
CI/CD (Continuous Integration/Continuous Deployment) automates your deployment process:
- ✅ Automatic Deployment – Deploys when you push code
- ✅ No Manual Steps – No need to run scripts locally
- ✅ Secure – Uses OIDC (OpenID Connect) instead of storing AWS keys
- ✅ Consistent – Same process every time
- ✅ Traceable – All deployments logged in GitHub Actions
📋 Prerequisites
Before setting up CI/CD, ensure you have:
- ✅ AWS account configured locally (see Section 7)
- ✅ GitHub repository with your code
- ✅ Admin access to the GitHub repository
- ✅ Terraform infrastructure already deployed (or ready to deploy)
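The remote-state setup that follows creates per-environment state buckets and lock tables. Their naming convention can be sketched as follows (these exact names are illustrative assumptions; the setup script appends its own unique suffixes):

```shell
#!/bin/sh
# Illustrative naming for per-environment remote-state resources.
state_bucket() { echo "terraform-state-$1"; }
lock_table()   { echo "terraform-state-lock-$1"; }

state_bucket staging
lock_table staging
```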
🔧 Step 1: Set Up Remote State Backend
CI/CD requires remote state (S3 + DynamoDB) so multiple deployments can share the same state file.
Create Remote State Resources
Use the helper script to create S3 bucket and DynamoDB table for Terraform state:
# For staging environment
./scripts/setup-remote-state.sh staging

# For production environment
./scripts/setup-remote-state.sh prod

This script:
- Creates S3 bucket for state storage
- Creates DynamoDB table for state locking
- Updates backend/staging.hcl or backend/prod.hcl automatically
Verify Backend Configuration
Check that the backend files are updated:
# Check staging backend
cat backend/staging.hcl

# Should show:
# bucket = "terraform-state-staging-..."
# key = "static-site-deploy/staging/terraform.tfstate"
# region = "us-east-1"
# dynamodb_table = "terraform-state-lock-staging-..."
# encrypt = true

🔐 Step 2: Create IAM Roles with OIDC
Create IAM roles that GitHub Actions can assume using OIDC (no AWS keys needed!).
Get Your AWS Account ID
# Get your AWS account ID
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
echo $AWS_ACCOUNT_ID

Get Your GitHub Repository
Your repository should be in the format: owner/repository-name
Example: m-saad-siddique/static-site-IAC-deploy
Create IAM Roles
Run the setup script for each environment:
# Create staging IAM role
./scripts/setup-iam-oidc.sh staging $AWS_ACCOUNT_ID m-saad-siddique/static-site-IAC-deploy

# Create production IAM role
./scripts/setup-iam-oidc.sh prod $AWS_ACCOUNT_ID m-saad-siddique/static-site-IAC-deploy

Replace:
- $AWS_ACCOUNT_ID – Your AWS account ID from the previous step
- m-saad-siddique/static-site-IAC-deploy – Your GitHub repository (owner/repo)
What the Script Creates
The script automatically creates:
- ✅ OIDC Provider – Connects AWS to GitHub (created once per AWS account)
- ✅ IAM Policy – Permissions for S3, CloudFront, and Terraform operations
- ✅ IAM Role – Can be assumed by GitHub Actions
- ✅ Trust Policy – Allows GitHub Actions to assume the role
Capture Role ARNs
The script will output the role ARN. Copy this value – you’ll need it for GitHub Secrets:
✅ IAM Role created successfully!

Role ARN: arn:aws:iam::123456789012:role/webgl-staging-deployment-role

📝 Next steps:
1. Add this ARN to GitHub Secrets as: AWS_ROLE_ARN_STAGING
2. Add AWS_ACCOUNT_ID to GitHub Secrets: 123456789012

🔑 Step 3: Configure GitHub Secrets
Add secrets to your GitHub repository so workflows can authenticate with AWS.
Navigate to GitHub Secrets
- Go to your GitHub repository
- Click Settings (top menu)
- Click Secrets and variables → Actions (left sidebar)
- Click New repository secret
Add Required Secrets
Add these three secrets:
Secret 1: AWS_ACCOUNT_ID
- Name: AWS_ACCOUNT_ID
- Value: Your AWS account ID (e.g., 123456789012)

Secret 2: AWS_ROLE_ARN_STAGING
- Name: AWS_ROLE_ARN_STAGING
- Value: The staging role ARN from Step 2 (e.g., arn:aws:iam::123456789012:role/webgl-staging-deployment-role)

Secret 3: AWS_ROLE_ARN_PROD
- Name: AWS_ROLE_ARN_PROD
- Value: The production role ARN from Step 2 (e.g., arn:aws:iam::123456789012:role/webgl-prod-deployment-role)
📝 Step 4: Deploy Infrastructure with Terraform
Deploy your infrastructure using Terraform. The IAM roles are already created, so Terraform can use them.
Deploy Staging
# Plan staging deployment
./scripts/plan.sh staging

# Apply staging infrastructure
./scripts/apply.sh staging

Deploy Production
# Plan production deployment
./scripts/plan.sh prod

# Apply production infrastructure
./scripts/apply.sh prod

⚙️ Step 5: Understand GitHub Actions Workflows
The project includes pre-configured workflows in .github/workflows/:
Workflow Files
| File | Triggers On | Environment |
|---|---|---|
| deploy-staging.yml | Push to staging branch | Staging |
| deploy-prod.yml | Push to main branch | Production |
What Workflows Do
Each workflow performs these steps:
- Checkout Code – Gets code from repository
- Configure AWS Credentials – Uses OIDC to assume IAM role
- Setup Terraform – Installs Terraform
- Initialize Terraform – Downloads providers, configures backend
- Select Workspace – Chooses environment workspace
- Plan & Apply – Creates/updates infrastructure
- Upload Files – Syncs build files to S3
- Invalidate CloudFront – Clears cache
Example Workflow Structure
name: Deploy to Staging

on:
  push:
    branches: [staging]

permissions:
  id-token: write   # Required for OIDC
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN_STAGING }}
          aws-region: us-east-1

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Init
        run: terraform init -backend-config=backend/staging.hcl

      - name: Terraform Apply
        run: terraform apply -auto-approve

🚀 Step 6: Test CI/CD
Test your CI/CD setup by pushing code to trigger a deployment.
Test Staging Deployment
# Make a small change (e.g., update a comment)
echo "# Test CI/CD" >> README.md

# Commit and push to staging branch
git add README.md
git commit -m "Test CI/CD deployment"
git push origin staging

Monitor Deployment
- Go to your GitHub repository
- Click Actions tab
- You should see “Deploy to Staging” workflow running
- Click on the workflow to see detailed logs
Verify Deployment
Once the workflow completes:
- ✅ Check the workflow shows green checkmark (success)
- ✅ Verify your site is updated at the CloudFront URL
- ✅ Check workflow logs for any warnings
🔄 How CI/CD Works (OIDC Flow)
Here’s how the secure authentication works:
- No AWS keys stored in GitHub
- Temporary credentials (expire after 1 hour)
- Scoped to specific repository
- Can restrict to specific branches
- All actions logged in CloudTrail
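The "restrict to specific branches" point is implemented in the IAM role's trust policy. As a hedged sketch, a GitHub OIDC trust policy typically looks like the following (the account ID, repository, and branch are example values; the actual policy is generated by setup-iam-oidc.sh and may differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
      },
      "StringLike": {
        "token.actions.githubusercontent.com:sub": "repo:m-saad-siddique/static-site-IAC-deploy:ref:refs/heads/staging"
      }
    }
  }]
}
```

The sub condition is what scopes the role to one repository and, here, one branch; loosening it to repo:owner/repo:* would allow any branch or tag.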
🌿 Branch Strategy
This project uses a three-branch workflow:
| Branch | Environment | CI/CD | Purpose |
|---|---|---|---|
dev | Development | Optional | Local testing and development |
staging | Staging | ✅ Auto-deploy | Pre-production testing |
main | Production | ✅ Auto-deploy | Live production environment |
🔧 Step 7: Customize Workflows (Optional)
You can customize workflows to match your needs:
Change Build Directory
If your build files are in a different directory:
# In .github/workflows/deploy-staging.yml
# Find the "Upload to S3" step and change:
- name: Upload to S3
  run: |
    aws s3 sync ./your-build-directory s3://$BUCKET/ --delete

Add Build Step
If you need to build your site before deploying:
# Add before "Upload to S3" step:
- name: Build Static Site
  run: |
    npm install
    npm run build
    # Or your build commands

Add Notifications
Get notified when deployments complete:
# Add at the end of workflow:
- name: Notify on Success
  if: success()
  run: |
    echo "✅ Deployment successful!"
    # Add Slack/Discord/email notification here

🔍 Troubleshooting CI/CD
Issue: “Not authorized to perform sts:AssumeRoleWithWebIdentity”
Solution:
- Verify OIDC provider exists: aws iam list-open-id-connect-providers
- Check repository name matches in IAM role trust policy
- Ensure id-token: write permission is in workflow
Issue: “Role ARN not found”
Solution:
- Verify GitHub Secret AWS_ROLE_ARN_STAGING or AWS_ROLE_ARN_PROD exists
- Check the ARN is correct (no typos)
- Verify the role exists in AWS: aws iam get-role --role-name webgl-staging-deployment-role
Issue: “State lock error”
Solution:
- Another deployment might be running – wait for it to complete
- If stuck, check the DynamoDB table for the lock entry and remove it manually
Issue: “Backend configuration not found”
Solution:
- Verify backend/staging.hcl or backend/prod.hcl exists
- Check the file contains valid bucket and table names
- Ensure remote state resources were created with setup-remote-state.sh
Issue: “Workflow not triggering”
Solution:
- Check workflow file is in the .github/workflows/ directory
- Verify branch name matches workflow trigger (e.g., staging or main)
- Check GitHub Actions is enabled in repository settings
📋 Quick Reference
# 1. Setup remote state
./scripts/setup-remote-state.sh staging
./scripts/setup-remote-state.sh prod

# 2. Get AWS account ID
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

# 3. Create IAM roles
./scripts/setup-iam-oidc.sh staging $AWS_ACCOUNT_ID m-saad-siddique/static-site-IAC-deploy
./scripts/setup-iam-oidc.sh prod $AWS_ACCOUNT_ID m-saad-siddique/static-site-IAC-deploy

# 4. Add GitHub Secrets:
# - AWS_ACCOUNT_ID
# - AWS_ROLE_ARN_STAGING
# - AWS_ROLE_ARN_PROD

# 5. Deploy infrastructure
./scripts/apply.sh staging
./scripts/apply.sh prod

# 6. Test CI/CD
git push origin staging   # Triggers staging deployment
git push origin main      # Triggers production deployment

✅ CI/CD Checklist
- ✅ Remote state backend configured (S3 + DynamoDB)
- ✅ IAM roles created with OIDC
- ✅ GitHub Secrets added (AWS_ACCOUNT_ID, role ARNs)
- ✅ Infrastructure deployed with Terraform
- ✅ Workflow files in .github/workflows/
- ✅ Tested deployment by pushing to branch
- ✅ Verified deployment in GitHub Actions

Once the checklist is complete, every push to staging or main will automatically deploy your infrastructure and site.

Workflows & Different Environments
This project supports multiple environments (dev, staging, production) with separate infrastructure for each.
Branch Strategy
This project uses a three-branch workflow:
| Branch | Environment | Deployment | Purpose |
|---|---|---|---|
| dev | Development | Auto (on push) | Fast iteration, testing |
| staging | Staging | Auto (on push) | Pre-production testing |
| main | Production | Auto (with optional approval) | Live production environment |
Deployment Workflow
git push origin dev

Terraform Workspaces
Each environment uses a separate Terraform workspace for state isolation:
- dev workspace – Manages dev environment state
- staging workspace – Manages staging environment state
- prod workspace – Manages production environment state
Managing Multiple Environments
Deploy to Specific Environment
# Deploy dev
./scripts/plan.sh dev
./scripts/apply.sh dev

# Deploy staging
./scripts/plan.sh staging
./scripts/apply.sh staging

# Deploy prod
./scripts/plan.sh prod
./scripts/apply.sh prod

View Environment Outputs
./scripts/outputs.sh dev # Dev environment
./scripts/outputs.sh staging # Staging environment
./scripts/outputs.sh prod    # Production environment

Destroy Specific Environment
# Destroy only staging (dev and prod remain)
./scripts/destroy.sh staging

# Destroy only dev
./scripts/destroy.sh dev

Branch Protection Rules
To enforce the workflow, set up branch protection in GitHub:
- Go to Repository Settings → Branches
- Add rule for staging:
  - Require pull request before merging
  - Do not allow deleting this branch
  - Restrict direct pushes
- Add rule for main:
  - Require pull request before merging
  - Require approvals (optional)
  - Do not allow deleting this branch
  - Restrict direct pushes
- Add rule for dev:
  - Do not allow deleting this branch

The branch-protection.yml workflow automatically validates PR rules. It blocks invalid merges (e.g., a direct PR to main from a feature branch).

Environment Configuration
Each environment has its own configuration file:
- environments/dev/terraform.tfvars – Dev settings
- environments/staging/terraform.tfvars – Staging settings
- environments/prod/terraform.tfvars – Production settings
Key differences:
| Setting | Dev | Staging | Prod |
|---|---|---|---|
| IAM Roles | Disabled | Enabled (OIDC) | Enabled (OIDC) |
| Cache TTL | 1 hour | 1 day | 1 year |
| Versioning | Disabled | Enabled | Enabled |
15. Essential Best Practices
🔒 Security (Must Do)
# 1. Never commit secrets
echo "*.tfvars" >> .gitignore
echo ".aws/" >> .gitignore

# 2. Always plan before apply
terraform plan -var-file=environments/dev/terraform.tfvars
terraform apply -var-file=environments/dev/terraform.tfvars

# 3. Use separate environments
terraform workspace select dev
terraform apply -var-file=environments/dev/terraform.tfvars

📝 Code Quality (Quick Wins)
# Format code before committing
terraform fmt

# Validate syntax
terraform validate

# Check what will change
terraform plan

✅ Quick Checklist
Before deploying:
□ Run terraform fmt
□ Run terraform validate
□ Run terraform plan (review output)
□ Test in dev first
□ Never commit .tfvars with secrets

16. Troubleshooting
Common issues and how to resolve them:
Common Issues
Issue: “AWS profile ‘deploy-config’ is not configured”
Solution:
aws configure --profile deploy-config

Issue: “Workspace doesn’t exist”
Solution: Workspace is created automatically on first plan/apply. Just run:
./scripts/plan.sh dev

Issue: “Unable to determine S3 bucket name”
Solution: Make sure you’ve run terraform apply first:
./scripts/apply.sh dev

Issue: CI/CD workflow fails with “Role not found”
Solution:
- Verify IAM roles are created: ./scripts/outputs.sh staging
- Check GitHub Secrets are set correctly
- Verify OIDC is configured in terraform.tfvars
Issue: CloudFront shows “Distribution not ready”
Solution: Wait 15-20 minutes. CloudFront distributions take time to deploy globally.
Useful Commands
# List all workspaces
terraform workspace list

# Show current workspace
terraform workspace show

# Switch workspace manually
terraform workspace select staging

# View Terraform state
terraform state list

# Check AWS profile
aws sts get-caller-identity --profile deploy-config

17. Frequently Asked Questions (FAQ)
Common questions and answers about Terraform, AWS, and this deployment project.
🤔 General Questions
Q: What is Infrastructure as Code (IaC)?
A: Infrastructure as Code is the practice of managing and provisioning infrastructure through machine-readable definition files, rather than through manual processes. Terraform is an IaC tool that lets you define infrastructure in code and version control it.
Q: Why use Terraform instead of AWS Console?
A: Terraform provides:
- ✅ Version control for infrastructure
- ✅ Reproducible deployments
- ✅ Ability to review changes before applying
- ✅ Documentation of your infrastructure
- ✅ Multi-cloud support
- ✅ Team collaboration
Q: Do I need to know programming to use Terraform?
A: No! Terraform uses HCL (HashiCorp Configuration Language), which is declarative and easy to learn. You describe what you want, and Terraform figures out how to create it.
🔧 Terraform Questions
Q: What’s the difference between terraform plan and terraform apply?
A:
- terraform plan: Shows what changes will be made without actually making them. Safe to run multiple times.
- terraform apply: Actually creates, modifies, or destroys resources. This makes real changes to your infrastructure.
Always run plan before apply to review changes!
Q: What is a Terraform workspace?
A: A workspace is a named state container. Each workspace has its own state file, allowing you to manage multiple environments (dev, staging, prod) with the same Terraform code but separate infrastructure.
Q: Can I use the same Terraform code for multiple environments?
A: Yes! That’s one of Terraform’s strengths. Use:
- Different .tfvars files per environment
- Different workspaces per environment
- Different backend configurations per environment
This keeps your code DRY (Don’t Repeat Yourself).
Q: What happens if I delete a resource manually in AWS Console?
A: Terraform will detect the resource is missing on the next plan or apply and try to recreate it. To sync Terraform state with reality, use terraform apply -refresh-only or terraform import.
Q: How do I update Terraform provider versions?
A: Update the version constraint in versions.tf:
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0" # Update this
}
}Then run terraform init -upgrade to download the new version.
☁️ AWS Questions
Q: Why is my S3 bucket private? Can’t I make it public?
A: This project uses a private S3 bucket with CloudFront for security and performance:
- ✅ Security: Private buckets are more secure
- ✅ Performance: CloudFront CDN is faster than direct S3 access
- ✅ HTTPS: CloudFront provides free SSL certificates
You can make S3 public, but it’s not recommended for production.
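As a sketch of how the private-bucket stance is typically enforced in Terraform (argument names follow the AWS provider; the site bucket reference is illustrative, not necessarily what this project uses):

```hcl
# Block every form of public access to the site bucket
resource "aws_s3_bucket_public_access_block" "site" {
  bucket = aws_s3_bucket.site.id # illustrative bucket reference

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

With all four flags set, even an accidentally-added public bucket policy or ACL is rejected; only the CloudFront origin configured via OAC can read objects.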
Q: What is Origin Access Control (OAC)?
A: OAC is AWS’s modern way to securely connect CloudFront to private S3 buckets. It replaces the older OAI (Origin Access Identity). OAC ensures only CloudFront can access your S3 bucket, keeping it private while allowing CloudFront to serve content.
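A minimal sketch of how an OAC is declared in Terraform (the name is illustrative; the exact resource in this project's modules may differ):

```hcl
# Origin Access Control: lets CloudFront sign requests to the private bucket
resource "aws_cloudfront_origin_access_control" "site" {
  name                              = "site-oac" # illustrative name
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"   # sign every origin request
  signing_protocol                  = "sigv4"
}
```

The OAC's ID is then referenced from the distribution's S3 origin, and the bucket policy grants s3:GetObject only to the CloudFront service principal for that distribution.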
Q: How long does CloudFront take to deploy?
A: CloudFront distributions typically take 15-20 minutes to fully deploy globally. The URL is available immediately, but it may show “Distribution not ready” until deployment completes.
Q: Why do I need DynamoDB for Terraform state?
A: DynamoDB provides state locking, which prevents multiple people (or CI/CD runs) from modifying infrastructure simultaneously. This prevents conflicts and corruption of your Terraform state.
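Assembled as Terraform configuration, the remote backend with locking mirrors the fields shown in backend/staging.hcl earlier (bucket and table names here are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket         = "terraform-state-staging-example"              # illustrative
    key            = "static-site-deploy/staging/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock-staging-example"         # enables locking
    encrypt        = true
  }
}
```

When dynamodb_table is set, every plan/apply first writes a lock item to that table and deletes it on completion; a second concurrent run fails fast with a lock error instead of corrupting state.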
Q: Can I use a custom domain with CloudFront?
A: Yes! You need to:
- Request an SSL certificate in AWS Certificate Manager (ACM)
- Add the certificate ARN to your CloudFront configuration
- Add your domain as an alias in CloudFront
- Create a CNAME record in your DNS pointing to CloudFront
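In Terraform terms, steps 2–3 usually translate to a fragment like this inside the aws_cloudfront_distribution resource (the domain and certificate ARN are placeholders; note the ACM certificate must be issued in us-east-1 for CloudFront):

```hcl
  # Fragment of an aws_cloudfront_distribution resource (illustrative values)
  aliases = ["www.example.com"]

  viewer_certificate {
    acm_certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/example"
    ssl_support_method  = "sni-only"
  }
```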
🔐 Security Questions
Q: Is it safe to store Terraform state in S3?
A: Yes, if you:
- ✅ Enable encryption at rest
- ✅ Enable versioning
- ✅ Use bucket policies to restrict access
- ✅ Never commit state files to version control
State files may contain sensitive data, so always encrypt them.
Q: How does OIDC authentication work?
A: OIDC (OpenID Connect) allows GitHub Actions to request temporary AWS credentials without storing AWS keys:
- GitHub generates a JWT token with repository information
- AWS validates the token against the OIDC provider
- AWS issues temporary credentials (valid for 1 hour)
- GitHub Actions uses these credentials to deploy
This is more secure than storing long-lived AWS access keys.
Q: Should I use the same AWS account for dev, staging, and prod?
A: For production, it’s best practice to use separate AWS accounts:
- ✅ Better security isolation
- ✅ Prevents accidental production changes
- ✅ Easier compliance and auditing
For learning or small projects, a single AWS account with separate environments (workspaces) is fine.
🚀 Deployment Questions
Q: How do I update my static site after deployment?
A: Upload new files:
# Local deployment
./scripts/upload.sh dev ./build

# Or manually
aws s3 sync ./build s3://your-bucket/ --delete
aws cloudfront create-invalidation --distribution-id YOUR_DIST_ID --paths "/*"

Or push to GitHub if CI/CD is configured – it will deploy automatically!
Q: Why do I need to invalidate CloudFront cache?
A: CloudFront caches files at edge locations. After uploading new files to S3, you need to invalidate the cache so users get the latest version. Without invalidation, users might see old cached content.
Q: Can I deploy to multiple regions?
A: CloudFront is global, so you don’t need multiple regions. However, if you want S3 buckets in multiple regions, you can:
- Create separate Terraform configurations per region
- Use Terraform workspaces with region-specific backends
- Use CloudFront with multiple origins
Q: How do I rollback a deployment?
A: Several options:
- Upload previous version: Upload the previous build files to S3
- Terraform rollback: Use terraform state commands or restore from a state backup
- S3 versioning: If enabled, restore previous object versions
🛠️ Troubleshooting Questions
Q: Terraform says “state is locked” – what do I do?
A: Another operation is running. Options:
- Wait for the other operation to complete
- If stuck, manually unlock: terraform force-unlock LOCK_ID
- For remote state, check the DynamoDB table for lock entries
Warning: Only unlock if you’re sure no operation is running!
Q: My CloudFront URL shows “Access Denied” – why?
A: Common causes:
- CloudFront distribution not fully deployed (wait 15-20 minutes)
- OAC not properly configured
- S3 bucket policy missing or incorrect
- No files uploaded to S3 yet
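A quick way to narrow down which cause applies is to look at the actual status code the URL returns. A minimal sketch, assuming curl is installed (the helper name and example URL are illustrative, not part of the project):

```shell
#!/usr/bin/env bash
# Hypothetical helper: print only the HTTP status code a URL returns,
# so you can distinguish 403 (Access Denied) from 404 or 200.
status_of() {
  local url="$1"
  # -s silent, -o discard the body, -w print just the status code.
  # curl prints 000 if the connection itself fails.
  curl -s -o /dev/null -w "%{http_code}" "$url"
}

# Example (replace with your CloudFront URL):
# status_of "https://d1234567890.cloudfront.net/"
```

A 403 points at OAC or bucket-policy problems (or an empty bucket), while 000 means the distribution name does not resolve or the connection failed.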
Q: How do I see what Terraform will change before applying?
A: Always use terraform plan:
./scripts/plan.sh dev   # Shows planned changes

The plan shows what will be created (+), modified (~), or destroyed (-).
Q: Can I destroy everything and start over?
A: Yes, but be careful:
# Destroy specific environment (removes ALL resources in that environment)
./scripts/destroy.sh dev

Warning: This is irreversible! Make sure you have backups if needed.
💡 Tips & Tricks
Q: How can I speed up Terraform operations?
A:
- Use the -parallelism flag to run operations in parallel
- Cache Terraform providers between runs
- Use targeted operations: terraform apply -target=resource
- Avoid unnecessary terraform apply -refresh-only calls
Q: How do I share Terraform state with my team?
A: Use remote state (S3 backend):
- Configure backend/*.hcl files
- Use DynamoDB for state locking
- All team members use the same backend
- State is automatically shared and locked
Q: Can I use this for non-WebGL static sites?
A: Absolutely! This works for any static site:
- React, Vue, Angular apps
- Jekyll, Hugo, Gatsby sites
- Plain HTML/CSS/JS
- Documentation sites
- Any static content
Q: How do I add more AWS services?
A: Add resources to your Terraform configuration:
- Create new .tf files or add to existing ones
- Use Terraform modules for reusable components
- Update variables and outputs as needed
- Test in dev environment first
18. Command Reference
🔧 Terraform Commands
Basic Commands
| Command | Description |
|---|---|
| terraform init | Initialize Terraform, download providers |
| terraform init -backend-config=backend/dev.hcl | Initialize with remote backend configuration |
| terraform fmt | Format all .tf files in current directory |
| terraform fmt -recursive | Format all .tf files recursively |
| terraform validate | Validate Terraform configuration syntax |
| terraform show | Show current state in human-readable format |
| terraform state list | List all resources in current state |
| terraform state show resource | Show details of specific resource |
Workspace Commands
| Command | Description |
|---|---|
| terraform workspace list | List all available workspaces |
| terraform workspace show | Display current workspace name |
| terraform workspace select dev | Switch to specified workspace |
| terraform workspace new staging | Create new workspace |
| terraform workspace delete staging | Delete workspace (must destroy resources first) |
Plan & Apply Commands
| Command | Description |
|---|---|
| terraform plan | Create execution plan (preview changes) |
| terraform plan -var-file=environments/dev/terraform.tfvars | Plan with environment-specific variables |
| terraform plan -out=tfplan.out | Save plan to file for later use |
| terraform apply | Apply changes (will prompt for confirmation) |
| terraform apply -var-file=environments/dev/terraform.tfvars | Apply with environment-specific variables |
| terraform apply tfplan.out | Apply previously saved plan |
| terraform apply -auto-approve | Apply without confirmation prompt |
| terraform destroy | Destroy all resources in state |
| terraform destroy -var-file=environments/dev/terraform.tfvars | Destroy with environment-specific variables |
| terraform destroy -target=resource | Destroy only specific resource |
Output Commands
| Command | Description |
|---|---|
| terraform output | Display all output values |
| terraform output deployment_url | Display specific output value |
| terraform output -raw s3_bucket_id | Display raw output value (no quotes) |
| terraform output -json | Display outputs in JSON format |
State Management
| Command | Description |
|---|---|
| terraform state list | List all resources in current state |
| terraform state show aws_s3_bucket.main | Show detailed information about specific resource |
| terraform state rm aws_s3_bucket.main | Remove resource from state (doesn’t destroy it in AWS) |
| terraform import aws_s3_bucket.main bucket-name | Import existing AWS resource into Terraform state |
| terraform state mv old_address new_address | Move resource to different address in state |
| terraform force-unlock LOCK_ID | Force unlock state (use with caution) |
☁️ AWS CLI Commands
Configuration
| Command | Description |
|---|---|
| aws configure | Configure AWS CLI with default profile |
| aws configure --profile deploy-config | Configure named AWS profile |
| aws configure list-profiles | List all configured AWS profiles |
| export AWS_PROFILE=deploy-config | Set default AWS profile for current session |
| aws sts get-caller-identity | Verify AWS credentials and get account info |
S3 Commands
| Command | Description |
|---|---|
| aws s3 ls | List all S3 buckets |
| aws s3 ls s3://bucket-name/ | List objects in bucket |
| aws s3 ls s3://bucket-name/ --recursive | List all objects recursively |
| aws s3 cp file.txt s3://bucket-name/ | Upload single file to S3 |
| aws s3 sync ./build s3://bucket-name/ | Upload directory to S3 (sync) |
| aws s3 sync ./build s3://bucket-name/ --delete | Sync directory and delete removed files |
| aws s3 cp s3://bucket-name/file.txt ./ | Download file from S3 |
| aws s3 sync s3://bucket-name/ ./download/ | Download directory from S3 |
| aws s3 rm s3://bucket-name/file.txt | Delete file from S3 |
| aws s3 rm s3://bucket-name/ --recursive | Delete all files in bucket |
| aws s3 mb s3://bucket-name | Create new S3 bucket |
| aws s3 rb s3://bucket-name | Delete empty S3 bucket |
| aws s3 rb s3://bucket-name --force | Delete bucket and all contents |
CloudFront Commands
| Command | Description |
|---|---|
| aws cloudfront list-distributions | List all CloudFront distributions |
| aws cloudfront get-distribution --id DISTRIBUTION_ID | Get detailed information about distribution |
| aws cloudfront create-invalidation --distribution-id DIST_ID --paths "/*" | Invalidate CloudFront cache for specified paths |
| aws cloudfront list-invalidations --distribution-id DIST_ID | List all invalidations for distribution |
| aws cloudfront get-invalidation --distribution-id DIST_ID --id INV_ID | Get status of specific invalidation |
IAM Commands
| Command | Description |
|---|---|
| aws sts get-caller-identity | Get current AWS user/role identity |
| aws iam list-users | List all IAM users |
| aws iam list-roles | List all IAM roles |
| aws iam get-role --role-name role-name | Get details of specific IAM role |
| aws iam list-policies | List all IAM policies |
| aws iam attach-role-policy --role-name role-name --policy-arn POLICY_ARN | Attach policy to IAM role |
💻 Shell/Bash Commands
Environment Variables
| Command | Description |
|---|---|
| export AWS_PROFILE=deploy-config | Set AWS profile for current session |
| export AWS_DEFAULT_REGION=us-east-1 | Set default AWS region |
| export AWS_ACCESS_KEY_ID=your-key | Set AWS access key (not recommended, use profiles) |
| export AWS_SECRET_ACCESS_KEY=your-secret | Set AWS secret key (not recommended, use profiles) |
| echo $AWS_PROFILE | Display value of environment variable |
| env \| grep AWS | List all AWS-related environment variables |
File Operations
| Command | Description |
|---|---|
| cd /path/to/directory | Change to specified directory |
| cd .. | Go up one directory level |
| cd ~ | Go to home directory |
| ls | List files in current directory |
| ls -la | List files with details and hidden files |
| ls -lh | List files with human-readable sizes |
| mkdir directory-name | Create new directory |
| mkdir -p path/to/directory | Create directory and parent directories |
| rm file.txt | Remove file |
| rm -r directory/ | Remove directory recursively |
| rm -rf directory/ | Force remove directory |
| cp source.txt dest.txt | Copy file |
| cp -r source/ dest/ | Copy directory recursively |
| mv old.txt new.txt | Move or rename file |
| mv file.txt directory/ | Move file to directory |
Command Chaining & Variables
| Command | Description |
|---|---|
| command1 && command2 | Run command2 only if command1 succeeds |
| command1 \|\| command2 | Run command2 only if command1 fails |
| command1 ; command2 | Run both commands regardless of result |
| BUCKET=$(terraform output -raw s3_bucket_id) | Store command output in variable |
| echo $BUCKET | Display variable value |
| aws s3 ls s3://$BUCKET/ | Use variable in command |
| if [ $? -eq 0 ]; then echo "Success"; fi | Check if previous command succeeded |
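The chaining and variable-capture patterns above can be seen together in a tiny runnable sketch; the echoed commands stand in for the project's terraform/aws calls:

```shell
#!/usr/bin/env bash
# && runs the second command only if the first succeeds
true && echo "ran after success"

# || runs the second command only if the first fails
false || echo "ran after failure"

# Capture a command's output in a variable, then reuse it.
# (Stands in for: BUCKET=$(terraform output -raw s3_bucket_id))
BUCKET=$(echo "my-demo-bucket")
echo "would sync to s3://$BUCKET/"

# Check the previous command's exit status via $?
true
if [ $? -eq 0 ]; then echo "Success"; fi
```

Running the sketch prints "ran after success", "ran after failure", the bucket line, and "Success", one per line.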
Script Execution
| Command | Description |
|---|---|
| chmod +x script.sh | Make script executable |
| ./script.sh | Run executable script |
| bash script.sh | Run script with bash interpreter |
| sh script.sh | Run script with sh interpreter |
| ./script.sh dev ./build | Run script with arguments |
| bash -n script.sh | Check script syntax without executing |
Useful Utilities
| Command | Description |
|---|---|
| cat file.txt | Display entire file contents |
| less file.txt | View file with scrollable interface |
| head -n 20 file.txt | Display first 20 lines of file |
| tail -n 20 file.txt | Display last 20 lines of file |
| grep "pattern" file.txt | Search for pattern in file |
| grep -r "pattern" directory/ | Search for pattern recursively in directory |
| find . -name "*.tf" | Find files matching pattern |
| find . -type f -name "*.sh" | Find files by type and name |
| df -h | Display disk space usage |
| ls -l file.txt | Display file permissions and details |
| history | Display command history |
| history \| grep terraform | Search command history for pattern |
📜 Helper Scripts (Optional Shortcuts)
| Command | Description |
|---|---|
| ./scripts/init.sh | Initialize Terraform and setup workspace |
| ./scripts/plan.sh dev | Create execution plan for dev environment |
| ./scripts/plan.sh staging | Create execution plan for staging environment |
| ./scripts/plan.sh prod | Create execution plan for production environment |
| ./scripts/apply.sh dev | Apply infrastructure changes to dev |
| ./scripts/apply.sh staging | Apply infrastructure changes to staging |
| ./scripts/apply.sh prod | Apply infrastructure changes to production |
| ./scripts/upload.sh dev ./build | Upload files to dev S3 bucket and invalidate cache |
| ./scripts/upload.sh staging ./build | Upload files to staging S3 bucket and invalidate cache |
| ./scripts/upload.sh prod ./build | Upload files to production S3 bucket and invalidate cache |
| ./scripts/outputs.sh dev | Display Terraform outputs for dev environment |
| ./scripts/outputs.sh staging | Display Terraform outputs for staging environment |
| ./scripts/outputs.sh prod | Display Terraform outputs for production environment |
| ./scripts/destroy.sh dev | Destroy all resources in dev environment |
| ./scripts/destroy.sh staging | Destroy all resources in staging environment |
| ./scripts/destroy.sh prod | Destroy all resources in production environment |
| ./scripts/setup-remote-state.sh staging | Setup S3 and DynamoDB for remote state (staging) |
| ./scripts/setup-remote-state.sh prod | Setup S3 and DynamoDB for remote state (production) |
| ./scripts/setup-iam-oidc.sh staging $AWS_ACCOUNT_ID repo | Create IAM role for GitHub Actions OIDC (staging) |
| ./scripts/setup-iam-oidc.sh prod $AWS_ACCOUNT_ID repo | Create IAM role for GitHub Actions OIDC (production) |
🔀 Git Commands
| Command | Description |
|---|---|
| git clone https://github.com/m-saad-siddique/static-site-IAC-deploy.git | Clone repository from GitHub |
| cd static-site-IAC-deploy | Navigate to cloned repository |
| git status | Check status of working directory |
| git add file.txt | Stage specific file for commit |
| git add . | Stage all changes for commit |
| git commit -m "Message" | Commit staged changes with message |
| git push origin branch-name | Push commits to remote repository |
| git checkout -b feature-name | Create and switch to new branch |
| git switch -c feature-name | Create and switch to new branch (alternative) |
| git checkout branch-name | Switch to existing branch |
| git switch branch-name | Switch to existing branch (alternative) |
| git merge branch-name | Merge branch into current branch |
| git pull origin branch-name | Pull latest changes from remote |
| git log | View commit history |
| git log --oneline | View compact commit history |
19. 📚 Additional Resources
Official Documentation
- Terraform Documentation – Complete Terraform guide
- AWS Provider Documentation – All AWS resources
Learning Resources
- HashiCorp Learn – Free Terraform tutorials
- Terraform Intro – Getting started guide
- AWS Getting Started – AWS basics
Project Documentation
- README.md – Main project documentation
- WORKSPACES_GUIDE.md – Workspace management guide
- GITHUB_ACTIONS_SETUP.md – CI/CD setup instructions
- IAM_ROLE_GUIDE.md – IAM roles and policies explained
- AWS_PROFILE_SETUP.md – Local AWS configuration

