How to Deploy AWS ECS with Terraform (Step-by-Step Guide)


📚 Terraform Overview

This guide will help you deploy the complete AWS ECS infrastructure using Terraform (Infrastructure as Code). Terraform allows you to define, provision, and manage AWS resources using declarative configuration files.

Best for: Developers and DevOps engineers who want to automate infrastructure provisioning and manage it as code

📦 Repository: https://github.com/m-saad-siddique/terraform-aws-ecs-infrastructure

What is Terraform?

📖 Understanding Infrastructure as Code

Terraform is an open-source Infrastructure as Code (IaC) tool that enables you to safely and predictably create, change, and improve infrastructure. Instead of manually clicking through the AWS Console, you write configuration files that describe your desired infrastructure state.

Key Benefits:

  • Version Control: Track infrastructure changes in Git
  • Reproducibility: Deploy identical infrastructure across environments
  • Automation: Provision entire infrastructure with a single command
  • Collaboration: Team members can review and contribute to infrastructure changes
  • Safety: Preview changes before applying them
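To make the declarative model concrete, here is a minimal, self-contained configuration. It is an illustration only, not part of this repository: the bucket name is a placeholder, and the provider block mirrors the `temp_resources` profile used later in this guide.

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "temp_resources"
}

# Declare the desired state; Terraform computes the create/update/delete
# actions needed to reach it.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-name" # assumption: must be globally unique

  tags = {
    ManagedBy = "Terraform"
  }
}
```

Running `terraform plan` against this file previews the single bucket to be created; `terraform apply` creates it, and editing the file then re-applying converges AWS to the new state.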

Prerequisites

1. Required Tools

Install these before starting.

📦 Installation Requirements

Tool      | Version  | Installation
--------- | -------- | ------------
Terraform | >= 1.5.0 | Download
AWS CLI   | >= 2.0   | Download
Git       | Latest   | Download

Install Terraform

1. macOS (using Homebrew):

brew install terraform

2. Linux:

wget https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
unzip terraform_1.5.7_linux_amd64.zip
sudo mv terraform /usr/local/bin/

3. Verify Installation:

terraform version

You should see: Terraform v1.5.7 or higher

2. Configure AWS Credentials

Set up AWS access.

Step 1: Configure AWS Profile

1. Configure AWS Profile:

Terraform is configured to use the temp_resources AWS profile. Configure it using:

aws configure --profile temp_resources

You’ll be prompted for:

  • AWS Access Key ID: Your access key
  • AWS Secret Access Key: Your secret key
  • Default region: us-east-1
  • Default output format: json
2. Verify AWS Profile:

# Check profile configuration
aws configure list --profile temp_resources

# Verify profile credentials and account
aws sts get-caller-identity --profile temp_resources

This should return your AWS account ID, user ARN, and user ID.

💡 Profile Configuration: The Terraform configuration in main.tf is set to use the temp_resources profile automatically. You don’t need to set environment variables.

Step 2: Verify Terraform is Using the Correct Profile

1. After Terraform Init, Verify Profile Usage:

You can verify Terraform is using the correct profile in several ways:

# Method 1: Check Terraform plan output
terraform plan | head -20
# Look for account ID or region information

# Method 2: Use the Terraform console (after init)
terraform console
# Type: data.aws_caller_identity.current.account_id
# Exit with: exit

# Method 3: After running terraform plan, check CloudTrail or Resource Groups
# in the AWS Console to see which account resources are being created in
2. Quick Verification Script:

# Compare AWS profile account with Terraform account
PROFILE_ACCOUNT=$(aws sts get-caller-identity --profile temp_resources --query Account --output text)
echo "Profile Account: $PROFILE_ACCOUNT"

# After terraform init, check Terraform's account
terraform init
TERRAFORM_ACCOUNT=$(terraform console -var-file=terraform.tfvars <<< 'data.aws_caller_identity.current.account_id' | tr -d '"')
echo "Terraform Account: $TERRAFORM_ACCOUNT"

# They should match!
if [ "$PROFILE_ACCOUNT" == "$TERRAFORM_ACCOUNT" ]; then
  echo "✅ Terraform is using the correct profile!"
else
  echo "❌ Profile mismatch detected!"
fi
⚠️ Important: Make sure the temp_resources profile has the necessary AWS permissions to create and manage all resources (VPC, ECS, RDS, ALB, etc.).
3. Project Structure

Understanding the Terraform directory layout.

📁 Terraform Directory Structure

terraform/
├── main.tf                    # Main configuration file
├── variables.tf               # Variable definitions
├── outputs.tf                 # Output values
├── versions.tf                # Provider versions
├── terraform.tfvars.example   # Example variables file
├── terraform.tfvars           # Your actual variables (not in git)
└── modules/                   # Reusable modules
    ├── vpc/                   # VPC module
    ├── iam/                   # IAM roles module
    ├── ecr/                   # ECR repositories module
    ├── rds/                   # RDS database module
    ├── ecs/                   # ECS cluster module
    ├── alb/                   # Application Load Balancer module
    ├── cloudwatch/            # CloudWatch logs module
    ├── ssm/                   # Parameter Store module
    └── ecs-services/          # ECS services module
📦 Source Code: This Terraform configuration is available on GitHub: https://github.com/m-saad-siddique/terraform-aws-ecs-infrastructure
4. Configuration Setup

Configure your variables.

Step 1: Copy Example Variables File

1. Navigate to the Terraform Directory:

cd terraform

2. Copy the Example File:

cp terraform.tfvars.example terraform.tfvars

This creates your local variables file (already in .gitignore).

Step 2: Configure Required Variables

Open terraform.tfvars and update the following REQUIRED values:

# AWS Configuration
aws_region = "us-east-1"

# Project Configuration
project_name = "ecs-production"
environment  = "production"

# VPC Configuration
vpc_cidr           = "10.0.0.0/16"
availability_zones = ["us-east-1a", "us-east-1b"]

# RDS Configuration - ⚠️ CHANGE THESE!
db_instance_class    = "db.t3.micro"
db_allocated_storage = 20
db_name              = "fileanalyzer"
db_username          = "admin"
db_password          = "YourSecurePassword123!" # ⚠️ CHANGE THIS!

# ECR Configuration
frontend_ecr_repository = "frontend-repo"
backend_ecr_repository  = "backend-repo"

# ECS Task Configuration
frontend_cpu    = 512
frontend_memory = 1024
backend_cpu     = 1024
backend_memory  = 2048

# ECS Service Configuration
frontend_desired_count = 2
backend_desired_count  = 2

# Docker Image Tags
frontend_image_tag = "latest"
backend_image_tag  = "latest"

# Common Tags
tags = {
  Project     = "ECS-Infrastructure"
  ManagedBy   = "Terraform"
  Environment = "production"
  Owner       = "DevOps"
}
⚠️ Security Warning:
  • Never commit terraform.tfvars to version control
  • Use strong passwords for database
  • Consider using AWS Secrets Manager for sensitive values in production
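For production, one way to keep the password out of terraform.tfvars entirely is to read it from Parameter Store at plan time. This is a sketch, not part of this repository's configuration: it assumes the SecureString parameter was created out-of-band (e.g., by an administrator), since the repository's own SSM module writes parameters from variables.

```
# Sketch: read the database password from an existing SecureString parameter
# instead of storing it in terraform.tfvars.
data "aws_ssm_parameter" "db_password" {
  name            = "/ecs/backend/DB_PASSWORD" # assumption: created out-of-band
  with_decryption = true
}

# Then reference it wherever a password is expected, e.g.:
# db_password = data.aws_ssm_parameter.db_password.value
```

Note that the value still lands in Terraform state, so remote state encryption (covered under Best Practices) matters either way.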
5. Initialize Terraform

Download providers and modules.

Step 1: Initialize Terraform

1. Run Terraform Init:

terraform init

This downloads required providers and modules.

2. Expected Output:

  • ✓ Initializing provider plugins…
  • ✓ Finding latest version of hashicorp/aws…
  • ✓ Installing hashicorp/aws…
  • ✓ Terraform has been successfully initialized!

Step 2: Validate Configuration

1. Validate Terraform Files:

terraform validate

This checks your Terraform files for syntax errors.

Step 3: Review Execution Plan

1. Generate Plan:

terraform plan

This shows what resources will be created without actually creating them.

Review the plan carefully:
  • Check resource counts (should create ~50+ resources)
  • Verify resource names match your expectations
  • Confirm region and availability zones
  • Note any warnings or errors
⚠️ Important: Ensure you have the following AWS permissions:
  • EC2 (VPC, Subnets, Security Groups, NAT Gateways)
  • IAM (Roles and Policies)
  • ECR (Repositories)
  • RDS (Database instances)
  • ECS (Clusters, Services, Task Definitions)
  • ELB (Load Balancers, Target Groups)
  • CloudWatch (Log Groups)
  • SSM (Parameter Store)

📦 Terraform Modules Overview

This section explains each Terraform module and what resources it creates. Modules are reusable components that encapsulate related resources.

Best for: Understanding the infrastructure components and how they’re organized in Terraform

Module Architecture

  • VPC Module: Networking Foundation
  • IAM Module: Roles & Permissions
  • ECR Module: Docker Repositories
  • RDS Module: Database
  • ECS Module: Cluster
  • ALB Module: Load Balancer
  • CloudWatch Module: Log Groups
  • SSM Module: Parameters
  • ECS Services Module: Task Definitions & Services
1. VPC Module: Networking Foundation

📖 Module Overview

Location: modules/vpc/

Purpose: Creates the foundational networking infrastructure for all other resources.

Resources Created:

  • VPC with DNS support
  • 2 Public Subnets (one per AZ)
  • 2 Private Subnets (one per AZ)
  • Internet Gateway
  • 2 NAT Gateways (one per AZ)
  • Elastic IPs for NAT Gateways
  • Route Tables (Public and Private)
  • Route Table Associations
  • Security Group for ALB
  • Security Group for ECS Tasks
  • Security Group for RDS

📋 Module Configuration

The VPC module is configured in main.tf:

module "vpc" {
  source             = "./modules/vpc"
  project_name       = var.project_name
  vpc_cidr           = var.vpc_cidr
  availability_zones = var.availability_zones
  tags               = var.tags
}
💡 Cost Optimization: NAT Gateways cost ~$0.045/hour each. For development, consider using a single NAT Gateway in one AZ to reduce costs.
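The single-NAT optimization mentioned above can be sketched as a conditional count inside the VPC module. The variable and resource names here are illustrative assumptions, not the module's actual internals:

```
# Sketch: collapse per-AZ NAT Gateways into one for cost savings.
variable "single_nat_gateway" {
  type    = bool
  default = false
}

resource "aws_nat_gateway" "this" {
  # One NAT Gateway total, or one per availability zone.
  count         = var.single_nat_gateway ? 1 : length(var.availability_zones)
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
}
```

The trade-off: with a single NAT Gateway, private subnets in the other AZ lose outbound internet access if that AZ goes down, which is why production keeps one per AZ.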
2. IAM Module: Roles & Permissions

📖 Module Overview

Location: modules/iam/

Purpose: Creates IAM roles with appropriate permissions for ECS tasks.

Resources Created:

  • ECS Task Execution Role
  • ECS Task Execution Role Policy (managed)
  • ECS Task Execution SSM Policy (custom)
  • Frontend Task Role
  • Backend Task Role

📋 Module Configuration

module "iam" {
  source       = "./modules/iam"
  project_name = var.project_name
  aws_region   = var.aws_region
  tags         = var.tags
}
3. ECR Module: Docker Image Storage

📖 Module Overview

Location: modules/ecr/

Purpose: Creates Docker image repositories for frontend and backend.

Resources Created:

  • Frontend ECR Repository
  • Backend ECR Repository
  • Lifecycle Policy (keeps last 10 images)
  • Image Scanning Configuration
  • Encryption Configuration

📋 Module Configuration

module "ecr" {
  source                   = "./modules/ecr"
  frontend_repository_name = var.frontend_ecr_repository
  backend_repository_name  = var.backend_ecr_repository
  tags                     = var.tags
}
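The "keeps last 10 images" lifecycle policy listed above typically looks like the following inside the module. This is an illustrative sketch; the repository's actual rule wording may differ:

```
resource "aws_ecr_lifecycle_policy" "frontend" {
  repository = aws_ecr_repository.frontend.name

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Keep only the 10 most recent images"
      selection = {
        tagStatus   = "any"
        countType   = "imageCountMoreThan"
        countNumber = 10
      }
      action = { type = "expire" }
    }]
  })
}
```

Expiring old images keeps ECR storage costs flat as CI/CD pushes a new SHA-tagged image on every deploy.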
4. RDS Module: Managed Database

📖 Module Overview

Location: modules/rds/

Purpose: Creates a PostgreSQL database for the application.

Resources Created:

  • DB Subnet Group
  • RDS PostgreSQL Instance
  • Automated Backups (7 days retention)
  • Encryption at Rest

📋 Module Configuration

module "rds" {
  source             = "./modules/rds"
  project_name       = var.project_name
  private_subnet_ids = module.vpc.private_subnet_ids
  security_group_id  = module.vpc.rds_security_group_id
  instance_class     = var.db_instance_class
  allocated_storage  = var.db_allocated_storage
  db_name            = var.db_name
  db_username        = var.db_username
  db_password        = var.db_password
  tags               = var.tags
}
5. ECS Module: Container Orchestration

📖 Module Overview

Location: modules/ecs/

Purpose: Creates the ECS cluster for running containers.

Resources Created:

  • ECS Cluster
  • Container Insights (enabled)
  • Fargate Capacity Providers

📋 Module Configuration

module "ecs" {
  source       = "./modules/ecs"
  project_name = var.project_name
  vpc_id       = module.vpc.vpc_id
  tags         = var.tags
}
6. ALB Module: Load Balancer

📖 Module Overview

Location: modules/alb/

Purpose: Creates load balancer for distributing traffic to ECS services.

Resources Created:

  • Application Load Balancer
  • Frontend Target Group (port 80)
  • Backend Target Group (port 3001)
  • HTTP Listener (port 80)
  • HTTPS Listener (port 443, if certificate provided)
  • Listener Rules for routing

📋 Module Configuration

module "alb" {
  source                = "./modules/alb"
  project_name          = var.project_name
  vpc_id                = module.vpc.vpc_id
  public_subnet_ids     = module.vpc.public_subnet_ids
  alb_security_group_id = module.vpc.alb_security_group_id
  certificate_arn       = var.certificate_arn
  tags                  = var.tags
}
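The listener rules mentioned in the resources list route path-based traffic; a typical sketch for sending /api/* requests to the backend target group looks like this (priority, names, and path pattern are assumptions based on the guide's /api/* routing, not the module's exact code):

```
resource "aws_lb_listener_rule" "backend_api" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 10 # lower numbers are evaluated first

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.backend.arn
  }

  condition {
    path_pattern {
      values = ["/api/*"]
    }
  }
}
```

Everything that does not match /api/* falls through to the listener's default action, which forwards to the frontend target group.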
7. CloudWatch Module: Logging

📖 Module Overview

Location: modules/cloudwatch/

Purpose: Creates log groups for ECS task logging.

Resources Created:

  • CloudWatch Log Group: /ecs/frontend
  • CloudWatch Log Group: /ecs/backend
  • Log Retention Policy (30 days default)

📋 Module Configuration

module "cloudwatch" {
  source             = "./modules/cloudwatch"
  project_name       = var.project_name
  log_retention_days = 30
  tags               = var.tags
}
8. SSM Module: Parameter Store

📖 Module Overview

Location: modules/ssm/

Purpose: Stores configuration and secrets for ECS tasks.

Resources Created:

  • /ecs/backend/DB_HOST
  • /ecs/backend/DB_PORT
  • /ecs/backend/DB_NAME
  • /ecs/backend/DB_USER (SecureString)
  • /ecs/backend/DB_PASSWORD (SecureString)
  • /ecs/backend/FRONTEND_URL
  • /ecs/frontend/API_URL

📋 Module Configuration

module "ssm" {
  source       = "./modules/ssm"
  project_name = var.project_name
  db_host      = module.rds.db_instance_address
  db_port      = module.rds.db_instance_port
  db_name      = module.rds.db_name
  db_username  = var.db_username
  db_password  = var.db_password
  frontend_url = var.domain_name != "" ? "https://${var.domain_name}" : "http://${module.alb.alb_dns_name}"
  api_url      = var.domain_name != "" ? "https://${var.domain_name}/api" : "http://${module.alb.alb_dns_name}/api"
  tags         = var.tags

  depends_on = [
    module.rds,
    module.alb
  ]
}
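Containers consume these parameters as injected environment variables via the task definition's secrets section. The fragment below is an illustrative sketch of that wiring (the ARN and container name are placeholders, not values from this repository):

```
# Fragment of a task definition's container_definitions, showing how a
# SecureString parameter reaches the container as DB_PASSWORD.
container_definitions = jsonencode([{
  name  = "backend-app"
  image = "backend-image:latest"
  secrets = [
    {
      name      = "DB_PASSWORD"
      valueFrom = "arn:aws:ssm:us-east-1:123456789012:parameter/ecs/backend/DB_PASSWORD" # placeholder ARN
    }
  ]
}])
```

This is why the IAM module attaches a custom SSM policy to the ECS Task Execution Role: ECS reads the parameter at task launch, so the plaintext never appears in the task definition itself.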
9. ECS Services Module: Task Definitions & Services

📖 Module Overview

Location: modules/ecs-services/

Purpose: Creates task definitions and ECS services for frontend and backend.

Resources Created:

  • Frontend Task Definition
  • Backend Task Definition
  • Frontend ECS Service
  • Backend ECS Service
  • Auto Scaling Targets and Policies

📋 Module Configuration

module "ecs_services" {
  source                      = "./modules/ecs-services"
  project_name                = var.project_name
  ecs_cluster_id              = module.ecs.cluster_id
  ecs_cluster_name            = module.ecs.cluster_name
  private_subnet_ids          = module.vpc.private_subnet_ids
  ecs_security_group_id       = module.vpc.ecs_security_group_id
  ecs_task_execution_role_arn = module.iam.ecs_task_execution_role_arn
  frontend_task_role_arn      = module.iam.frontend_task_role_arn
  backend_task_role_arn       = module.iam.backend_task_role_arn
  frontend_ecr_repository_url = module.ecr.frontend_repository_url
  backend_ecr_repository_url  = module.ecr.backend_repository_url
  frontend_image_tag          = var.frontend_image_tag
  backend_image_tag           = var.backend_image_tag
  frontend_target_group_arn   = module.alb.frontend_target_group_arn
  backend_target_group_arn    = module.alb.backend_target_group_arn
  frontend_desired_count      = var.frontend_desired_count
  backend_desired_count       = var.backend_desired_count
  frontend_cpu                = var.frontend_cpu
  frontend_memory             = var.frontend_memory
  backend_cpu                 = var.backend_cpu
  backend_memory              = var.backend_memory
  tags                        = var.tags

  depends_on = [
    module.cloudwatch,
    module.ssm
  ]
}
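The "Auto Scaling Targets and Policies" this module creates typically follow the Application Auto Scaling target-tracking pattern. The sketch below is illustrative; resource names, capacity bounds, and the 60% CPU target are assumptions, not the module's actual values:

```
resource "aws_appautoscaling_target" "backend" {
  service_namespace  = "ecs"
  resource_id        = "service/${var.cluster_name}/${var.backend_service_name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 6
}

resource "aws_appautoscaling_policy" "backend_cpu" {
  name               = "backend-cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.backend.service_namespace
  resource_id        = aws_appautoscaling_target.backend.resource_id
  scalable_dimension = aws_appautoscaling_target.backend.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 60 # add/remove tasks to keep average CPU near 60%
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
```

With target tracking, ECS adjusts the desired count automatically between min_capacity and max_capacity, so the desired_count set at deploy time is only a starting point.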

⚙️ Deployment Guide

This section provides step-by-step instructions for deploying your infrastructure using Terraform.

Prerequisites: Complete Tab 1 (Getting Started) before proceeding.

Deployment Steps

1. Full Deployment

Deploy all infrastructure.

Step 1: Apply Terraform Configuration

1. Deploy Infrastructure:

terraform apply

Terraform will:

  1. Show you a plan of all resources to be created
  2. Ask for confirmation (type yes)
  3. Create all resources in the correct order
  4. Show outputs with important information
⏱️ Expected Time: 15-20 minutes for full deployment
⚠️ Important: The first deployment will create all resources. Subsequent runs will only update changed resources.
2. Verify Deployment

Check outputs and resources.

Step 1: View Terraform Outputs

1. View All Outputs:

terraform output

2. View Specific Outputs:

# Get ALB DNS name
terraform output alb_dns_name

# Get ECR repository URLs
terraform output frontend_ecr_repository_url
terraform output backend_ecr_repository_url

# Get RDS endpoint
terraform output rds_endpoint

# Get ECS cluster name
terraform output ecs_cluster_name
3. Push Docker Images

Deploy your application images.

Step 1: Login to ECR

1. Get ECR Login Token:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $(terraform output -raw frontend_ecr_repository_url | cut -d'/' -f1)

Step 2: Build and Push Frontend

1. Build and Push:

# Run from the terraform directory so `terraform output` resolves
ECR_URL=$(terraform output -raw frontend_ecr_repository_url)
cd ../frontend
docker build -t frontend:latest .
docker tag frontend:latest $ECR_URL:latest
docker push $ECR_URL:latest

Step 3: Build and Push Backend

1. Build and Push:

# Run from the terraform directory so `terraform output` resolves
ECR_URL=$(terraform output -raw backend_ecr_repository_url)
cd ../backend
docker build -t backend:latest .
docker tag backend:latest $ECR_URL:latest
docker push $ECR_URL:latest
4. Access Your Application

Test the deployment.

Step 1: Get ALB DNS Name

1. Get DNS Name:

terraform output alb_dns_name

Open the URL in your browser. The application should be accessible!

✅ Success! Your infrastructure is now deployed and running.
5. Update Infrastructure

Making changes to deployed resources.

Step 1: Modify Configuration

1. Edit Variables:

Modify terraform.tfvars or Terraform files as needed.

2. Review Changes:

terraform plan

Review what will change before applying.

3. Apply Changes:

terraform apply
6. Destroy Infrastructure

Clean up resources.
⚠️ Warning: This will delete ALL resources created by Terraform. Use with caution!

Step 1: Destroy Resources

1. Review Destruction Plan:

terraform plan -destroy

Review what will be destroyed.

2. Destroy Infrastructure:

terraform destroy

Type yes to confirm.

📋 CI/CD Overview

This section covers setting up automated deployment using GitHub Actions. Once configured, every push to your main branch will automatically build Docker images, push them to ECR, and update your ECS services.

Prerequisites: All infrastructure resources must be deployed using Terraform (complete Tab 3: Deployment Guide first).

📦 Application Repositories

You need to set up CI/CD for both repositories:

Note: Each repository requires its own GitHub Actions workflow file and secrets configuration. The workflow process is the same for both, but with different repository names, ECR repositories, and ECS services.

CI/CD Pipeline Architecture

GitHub Repository (Code Push) → GitHub Actions (Build & Deploy) → ECR (Push Image) → ECS Service (Update & Deploy)
1. GitHub Actions CI/CD Setup

Automated Deployment Pipeline

📖 Understanding GitHub Actions

GitHub Actions is a CI/CD platform that automates your software workflows. When you push code to GitHub, Actions can:

  • Build: Compile your application and create Docker images
  • Test: Run automated tests
  • Deploy: Push images to ECR and update ECS services

Manual deployments are error-prone and time-consuming. CI/CD ensures consistent, automated deployments every time you push code, eliminating "it works on my machine" issues.

Important: Since your infrastructure is managed by Terraform, the ECR repositories, ECS services, and task definitions are already created. The CI/CD pipeline will only build and deploy application code, not infrastructure changes.

Step 1: Get Terraform Outputs

1. Get ECR Repository URLs:

# Get frontend ECR repository URL
terraform output frontend_ecr_repository_url

# Get backend ECR repository URL
terraform output backend_ecr_repository_url

# Get ECS cluster name
terraform output ecs_cluster_name

You’ll need these values for the GitHub Actions workflow.

Step 2: Configure GitHub Secrets

1. Navigate to Repository Settings

  • Go to your GitHub repository
  • Click “Settings” tab
  • Click “Secrets and variables” → “Actions”
2. Create AWS Credentials Secret

Click “New repository secret” and add:

  • Name: AWS_ACCESS_KEY_ID
  • Value: Your AWS access key ID
  • Click “Add secret”
⚠️ Security: Never commit AWS credentials to your repository. Always use GitHub Secrets.
3. Add Remaining AWS Secrets

Add these AWS credential secrets (one at a time):

Secret Name           | Description
--------------------- | ---------------------------
AWS_SECRET_ACCESS_KEY | Your AWS secret access key
AWS_REGION            | e.g., us-east-1
💡 Note: You don’t need to add ECR_REPOSITORY, ECS_SERVICE, or ECS_CLUSTER as secrets. These values are hardcoded in the workflow file’s env section, which is the recommended approach.

🔄 How the GitHub Actions Workflow Works

The CI/CD pipeline automates the entire deployment process. Here’s how it works step by step:

  1. Trigger: When you push code to the main branch, GitHub Actions automatically detects the change and starts the workflow.
  2. Checkout Code: The workflow checks out your repository code to the GitHub Actions runner (a virtual machine).
  3. Configure AWS Credentials: Uses the AWS credentials stored in GitHub Secrets to authenticate with your AWS account.
  4. Login to ECR: Authenticates Docker with Amazon ECR so it can push images.
  5. Build Docker Image: Builds your application into a Docker image using the Dockerfile in your repository.
  6. Tag Image: Tags the image with the Git commit SHA (unique identifier) for version tracking.
  7. Push to ECR: Uploads the Docker image to your ECR repository (created by Terraform).
  8. Download Task Definition: Retrieves the current ECS task definition from AWS (managed by Terraform).
  9. Update Task Definition: Updates the task definition with the new image URI (points to the newly pushed image).
  10. Deploy to ECS: Updates the ECS service (created by Terraform) with the new task definition, triggering a rolling deployment.
  11. Wait for Stability: Waits for the new tasks to become healthy before completing.

Rolling Deployment Process:

  • ECS starts new tasks with the updated image
  • New tasks register with the target group and pass health checks
  • ALB gradually shifts traffic from old tasks to new tasks
  • Old tasks are stopped once new tasks are healthy
  • This ensures zero-downtime deployments
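The rolling-deployment behavior described above is governed by two service-level settings. The sketch below is illustrative (names and values are assumptions, not necessarily what the ecs-services module sets):

```
resource "aws_ecs_service" "frontend" {
  name            = "frontend-service"
  cluster         = var.cluster_id
  task_definition = var.task_definition_arn
  desired_count   = 2
  launch_type     = "FARGATE"

  # Keep at least half the tasks serving traffic while up to double the
  # desired count run during the rollout; this is what makes the
  # deployment zero-downtime.
  deployment_minimum_healthy_percent = 50
  deployment_maximum_percent         = 200

  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [var.ecs_security_group_id]
  }

  load_balancer {
    target_group_arn = var.target_group_arn
    container_name   = "frontend"
    container_port   = 80
  }
}
```

With desired_count = 2, a 50%/200% window lets ECS launch both replacement tasks before draining the old ones, so capacity never drops below one healthy task.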

For Frontend Repository: The workflow builds the Next.js application, pushes to the ECR repository created by Terraform, and updates the ECS service created by Terraform.

For Backend Repository: The workflow builds the Node.js/TypeScript backend, pushes to the ECR repository created by Terraform, and updates the ECS service created by Terraform.

Step 3: Create GitHub Actions Workflow

1. Create Workflow Directory

  • In your repository root, create: .github/workflows/
  • Create file: .github/workflows/deploy.yml
2. Frontend Workflow Example

For the Frontend Repository (github.com/m-saad-siddique/frontend), add this content to deploy.yml:

⚠️ Important: Replace the environment variables with values from your Terraform outputs:

name: Deploy Frontend to ECS

on:
  push:
    branches: [ main ]

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: frontend-repo                     # From terraform output: frontend_ecr_repository_url
  ECS_SERVICE: ecs-production-frontend-service      # Service name from Terraform
  ECS_CLUSTER: ecs-production-cluster               # From terraform output: ecs_cluster_name
  ECS_TASK_DEFINITION: ecs-production-frontend-task # Task definition family from Terraform
  CONTAINER_NAME: frontend

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Download task definition
        run: |
          aws ecs describe-task-definition \
            --task-definition $ECS_TASK_DEFINITION \
            --query taskDefinition > task-definition.json

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
3. Backend Workflow

For the Backend Repository (github.com/m-saad-siddique/backend), use the same workflow structure but update these environment variables:

Environment Variable | Frontend Value                  | Backend Value
-------------------- | ------------------------------- | ------------------------------
ECR_REPOSITORY       | frontend-repo                   | backend-repo
ECS_SERVICE          | ecs-production-frontend-service | ecs-production-backend-service
ECS_TASK_DEFINITION  | ecs-production-frontend-task    | ecs-production-backend-task
CONTAINER_NAME       | frontend                        | backend-app
💡 Tip: Get the exact service and task definition names from Terraform outputs:
# List all ECS services
aws ecs list-services --cluster $(terraform output -raw ecs_cluster_name)

# Get task definition names
aws ecs list-task-definitions --family-prefix ecs-production

Step 4: Test the Pipeline

1. Commit and Push

  • Commit the workflow file
  • Push to main branch
2. Monitor Deployment

  • Go to GitHub → “Actions” tab
  • Watch the workflow run
  • Check ECS Console to see service updating

📋 Workflow Execution Summary

When you push code to the Frontend Repository:

  1. GitHub Actions triggers the workflow
  2. Builds Next.js frontend application into Docker image
  3. Tags image with commit SHA (e.g., frontend-repo:abc123)
  4. Pushes image to ECR repository (created by Terraform)
  5. Downloads current task definition (managed by Terraform)
  6. Updates task definition with new image URI
  7. Deploys updated task definition to ECS service (created by Terraform)
  8. ECS performs rolling update (starts new tasks, shifts traffic, stops old tasks)
  9. New frontend version is live on ALB

When you push code to the Backend Repository:

  1. GitHub Actions triggers the workflow
  2. Builds Node.js/TypeScript backend application into Docker image
  3. Tags image with commit SHA (e.g., backend-repo:xyz789)
  4. Pushes image to ECR repository (created by Terraform)
  5. Downloads current task definition (managed by Terraform)
  6. Updates task definition with new image URI
  7. Deploys updated task definition to ECS service (created by Terraform)
  8. ECS performs rolling update (starts new tasks, shifts traffic, stops old tasks)
  9. New backend version is live and accessible via /api/* routes
🔄 Independent Deployments: Frontend and backend deployments are completely independent. You can deploy one without affecting the other. Each repository has its own workflow, ECR repository, and ECS service.
⚠️ Important Note: The CI/CD pipeline only deploys application code. Infrastructure changes (VPC, ALB, RDS, etc.) must be managed through Terraform. If you need to update infrastructure, modify your Terraform files and run terraform apply.

✅ CI/CD Setup Complete!

Your pipeline is now configured. Every push to main will automatically:

  1. Build your Docker image
  2. Push to ECR (repository created by Terraform)
  3. Update ECS service (created by Terraform) with new image
  4. Perform rolling deployment

Next Steps:

  • Push code to main branch in either repository
  • Monitor the deployment in GitHub Actions tab
  • Verify the new version is running in ECS Console
  • Test your application via the ALB DNS name (from Terraform outputs)
2. Optional: Terraform Infrastructure CI/CD

Automate infrastructure deployments.

📖 Understanding Terraform CI/CD

You can also set up CI/CD for your Terraform infrastructure repository. This allows you to automatically apply infrastructure changes when you push Terraform code updates.

Use Cases:

  • Automatically apply infrastructure changes on merge to main
  • Run terraform plan on pull requests
  • Validate Terraform code before merging
  • Maintain infrastructure as code in version control

Terraform Repository Workflow Example

Create .github/workflows/terraform.yml in your Terraform repository:

name: Terraform Infrastructure CI/CD

on:
  push:
    branches: [ main ]
    paths:
      - 'terraform/**'
  pull_request:
    branches: [ main ]
    paths:
      - 'terraform/**'

env:
  AWS_REGION: us-east-1
  TF_VERSION: 1.5.7

jobs:
  terraform-plan:
    name: Terraform Plan
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Terraform Init
        working-directory: ./terraform
        run: terraform init

      - name: Terraform Format Check
        working-directory: ./terraform
        run: terraform fmt -check

      - name: Terraform Validate
        working-directory: ./terraform
        run: terraform validate

      - name: Terraform Plan
        working-directory: ./terraform
        run: terraform plan -no-color

  terraform-apply:
    name: Terraform Apply
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Terraform Init
        working-directory: ./terraform
        run: terraform init

      - name: Terraform Plan
        working-directory: ./terraform
        run: terraform plan -no-color

      - name: Terraform Apply
        working-directory: ./terraform
        run: terraform apply -auto-approve
⚠️ Security Considerations:
  • Use -auto-approve only if you trust your team and review process
  • Consider requiring manual approval for production environments
  • Use separate AWS credentials with limited permissions for CI/CD
  • Enable branch protection rules on main branch

❓ Advanced Topics & Troubleshooting

This section covers advanced Terraform topics, common issues, and best practices.

Common Issues & Solutions

1. Terraform Errors

Common error solutions.

Error: “ResourceNotFoundException: The specified log group does not exist”

Solution: CloudWatch log groups are created automatically by the CloudWatch module. Ensure the module is applied before ECS services. If this error occurs, manually create the log groups:

aws logs create-log-group --log-group-name /ecs/frontend
aws logs create-log-group --log-group-name /ecs/backend

Error: “CannotPullContainerError”

Solution:

  • Verify Docker images are pushed to ECR
  • Check ECR repository URLs in task definitions
  • Ensure ECS Task Execution Role has ECR pull permissions
  • Verify image tags match what’s in task definition

Error: “InvalidParameterException: The security group does not allow ingress”

Solution: Security groups are configured in the VPC module. Check that:

  • ALB security group allows traffic from 0.0.0.0/0 on ports 80/443
  • ECS security group allows traffic from ALB security group
  • RDS security group allows traffic from ECS security group on port 5432

Error: “Insufficient capacity”

Solution:

  • Try a different availability zone
  • Reduce desired task count temporarily
  • Wait a few minutes and retry
  • Check AWS Service Health Dashboard

Best Practices

2. State Management

Managing Terraform state.

Remote State

Use Remote State: Configure S3 backend in versions.tf for team collaboration:

terraform {
  backend "s3" {
    bucket         = "your-terraform-state-bucket"
    key            = "ecs-infrastructure/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
}

State Locking

Enable State Locking: Use DynamoDB table for state locking to prevent concurrent modifications.
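The lock table referenced by the S3 backend must exist before `terraform init` runs against that backend, so it is usually created separately (by hand or in a small bootstrap configuration). A sketch of the table, assuming the name used in the backend block:

```
resource "aws_dynamodb_table" "terraform_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST" # no capacity planning needed for locking

  # The S3 backend requires a string hash key named exactly "LockID".
  hash_key = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

With this in place, concurrent `terraform apply` runs fail fast with a lock error instead of corrupting state.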

⚠️ Important:
  • Never commit *.tfstate files to version control
  • Regularly backup Terraform state files
  • Use remote state for team collaboration
3. Security Best Practices

Secure your infrastructure.

Secrets Management

  • Never Commit Secrets: Keep terraform.tfvars out of version control
  • Use Parameter Store: Store sensitive values in AWS Systems Manager Parameter Store
  • Rotate Credentials: Regularly update database passwords and access keys
  • Least Privilege: IAM roles follow principle of least privilege
  • Enable Encryption: All resources use encryption at rest
4. Cost Optimization

Reduce AWS costs.

Cost Saving Tips

  • Right-Size Resources: Start with smaller instances, scale up as needed
  • Use Spot Instances: Consider FARGATE_SPOT for non-production
  • Clean Up Old Images: ECR lifecycle policies automatically clean old images
  • Monitor Costs: Set up AWS Cost Explorer and budgets
  • Delete Unused Resources: Use terraform destroy for test environments
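The FARGATE_SPOT suggestion can be sketched as a capacity provider strategy on the cluster. This assumes the ECS module enables both capacity providers (its resources list mentions "Fargate Capacity Providers"); the weights here are illustrative:

```
resource "aws_ecs_cluster_capacity_providers" "this" {
  cluster_name       = var.cluster_name
  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  default_capacity_provider_strategy {
    capacity_provider = "FARGATE_SPOT"
    weight            = 4 # roughly 4 of every 5 tasks land on Spot
  }

  default_capacity_provider_strategy {
    capacity_provider = "FARGATE"
    weight            = 1
    base              = 1 # always keep at least one on-demand task
  }
}
```

Spot tasks can be interrupted with a two-minute warning, so keep a `base` of on-demand capacity for anything user-facing and reserve heavy Spot weighting for non-production.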

Debugging Commands

5. Useful Terraform Commands

Debugging and inspection.

State Inspection

# View all resources in state
terraform state list

# View specific resource
terraform state show module.vpc.aws_vpc.main

# Refresh state from AWS
terraform refresh

# View plan for specific resource
terraform plan -target=module.ecs_services

Formatting and Validation

# Format Terraform files
terraform fmt

# Validate configuration
terraform validate

