PRODUCTION-GRADE IMPLEMENTATION

Implementation Walkthrough

Real-world deployment of a containerized Node.js application on AWS ECS Fargate with automated CI/CD, infrastructure as code, and production-ready security practices.

100% Automated
3 Environments
Zero Downtime
HTTPS Secured
1

CI/CD Pipeline with GitHub Actions

Fully automated deployment pipeline that builds, tests, and deploys the application with environment-specific configuration and secrets management.

🔄
Auto-Deploy (Dev)
Automatic deployment to dev on every push to the main branch
🎯
Manual Promotion
Staging/Prod require manual workflow_dispatch for controlled releases
🔐
Secrets Management
AWS credentials stored securely in GitHub Secrets
⚙️
Config Variables
Infrastructure parameters via GitHub environment variables
✓ SUCCESS GitHub Actions Jobs
Three-stage pipeline: Build → Push to ECR → Deploy with CDKTF
GitHub Actions Jobs
DOCKER Build & Push to ECR
Multi-stage Docker build with ECR repository auto-creation
Docker Build and ECR Push
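The build-and-push job can be sketched as a small shell helper. The registry, repository, and tag values here are illustrative, not the project's actual names:

```shell
# Sketch of the CI build-and-push step. Registry, repo, and tag are
# supplied by the workflow; the values used below are placeholders.
push_image() {
  local registry="$1" repo="$2" tag="$3"
  # Authenticate Docker against ECR with a short-lived token
  aws ecr get-login-password | docker login --username AWS --password-stdin "$registry"
  # Multi-stage build defined in the project's Dockerfile
  docker build -t "${registry}/${repo}:${tag}" .
  docker push "${registry}/${repo}:${tag}"
}

# Example: push_image 123456789012.dkr.ecr.us-east-1.amazonaws.com turbovets-app dev-abc1234
```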
MANUAL Staging & Prod Deployment (workflow_dispatch)
Controlled promotion using a GitHub Actions manual trigger with an environment input
GitHub Actions Manual Staging and Prod Deploy
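A workflow_dispatch run can also be triggered from the GitHub CLI instead of the web UI. The workflow file name (deploy.yml) and the environment input name below are assumptions; match them to the repository's actual workflow:

```shell
# Manual promotion from the command line via the GitHub CLI.
# deploy.yml and the `environment` input are assumed names.
promote() {
  local target="$1"   # staging or prod
  gh workflow run deploy.yml -f environment="$target"
}

# Example: promote staging
```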
💡 ECR Lifecycle Management Strategy
The ECR repository is intentionally managed by the CI pipeline (not Terraform) to prevent accidental deletion when the repository contains images. This design decision solves the RepositoryNotEmptyException that would occur if Terraform tried to destroy a non-empty ECR repository.
  • CI creates repository on-demand if it doesn't exist
  • Terraform state does not track ECR (decoupled lifecycle)
  • Images remain safe during infrastructure updates/rebuilds
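The on-demand creation step might look like this minimal sketch (repository name illustrative):

```shell
# Create the ECR repository only if it is missing. Terraform never
# tracks this resource, so infrastructure destroys cannot trigger
# RepositoryNotEmptyException or delete stored images.
ensure_ecr_repo() {
  local repo="$1"
  aws ecr describe-repositories --repository-names "$repo" >/dev/null 2>&1 ||
    aws ecr create-repository --repository-name "$repo" >/dev/null
}

# Example: ensure_ecr_repo turbovets-app
```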
CDKTF Infrastructure Deployment
Terraform deployment via CDKTF with CloudWatch logging
Terraform and CloudWatch
🧱 Remote Terraform Backend (S3 + DynamoDB)
Terraform state is stored remotely to support collaboration, history, and safe concurrent usage:
  • S3 bucket: stores the terraform.tfstate files per environment
  • DynamoDB table: provides state locking to prevent concurrent cdktf deploy runs from corrupting the state
  • Backend config: wired through GitHub repository variables (TF_STATE_BUCKET, TF_LOCK_TABLE)
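Putting the backend wiring and the deploy step together, a hedged sketch of the per-environment deploy command; the turbovets-app-<env> stack-name pattern is an assumption:

```shell
# Deploy one environment's stack. The S3Backend configured in the
# CDKTF app reads TF_STATE_BUCKET and TF_LOCK_TABLE, which the
# workflow exports from GitHub repository variables.
deploy_env() {
  local env="$1"   # dev | staging | prod
  : "${TF_STATE_BUCKET:?must be set}" "${TF_LOCK_TABLE:?must be set}"
  npx cdktf deploy "turbovets-app-${env}" --auto-approve
}
```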
S3 BACKEND Remote Terraform State in S3
Centralized state storage with per-environment isolation
AWS Remote Terraform Backend S3
LOCKING DynamoDB State Lock Table
Prevents concurrent Terraform operations on the same state
AWS Remote Terraform Backend DynamoDB
🔒 SECRETS GitHub Secrets
AWS credentials stored securely per environment
GitHub Secrets
⚙️ CONFIG Environment Variables
Infrastructure parameters configured per environment
GitHub Variables
Variable Injection in Action

Complete variable injection during deployment showing all environment-specific configuration

2

IAM Configuration & Security

Least-privilege IAM setup with dedicated CI/CD user, grouped permissions, and role-based access control for ECS tasks.

👤
Dedicated CI User
Separate IAM user for GitHub Actions with programmatic access
👥
Group-Based Access
Permissions managed through IAM groups for easier maintenance
🎭
Task Roles
Separate execution and application roles for ECS tasks
🔒
Least Privilege
Minimum required permissions for each component
IAM USER CI/CD User Overview
Dedicated user for GitHub Actions pipeline
AWS IAM User
DETAILS User Configuration
User summary with access keys and group membership
AWS User Summary
GROUP Group Membership
User assigned to turbovets-github-actions-group
User in Group
PERMISSIONS Group Policies
Comprehensive permissions for CI/CD operations
Group Permissions
Complete User Permissions
TurboVetsDevOpsCiACMRoute53Policy policy

Full permission breakdown showing the inherited group policies and the dedicated Route53/ACM policy.

🔐 IAM Best Practices Implemented
  • Dedicated CI/CD user: Separate from human users for audit trail
  • Group-based permissions: Easier to manage and update policies
  • No inline policies: All permissions managed through groups and managed policies
  • Access key rotation: Credentials can be rotated without code changes
  • Task execution role: For ECS infrastructure operations (ECR pull, CloudWatch logs)
  • Task application role: Empty by default, extended only when app needs AWS APIs
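A one-time bootstrap of this identity could look like the following sketch. The group name matches the walkthrough above; the user name and the key handling are assumptions:

```shell
# One-time bootstrap of the dedicated CI identity. The group name
# matches the screenshots; the user name is a placeholder.
bootstrap_ci_user() {
  local user="turbovets-github-actions" group="turbovets-github-actions-group"
  aws iam create-user --user-name "$user"
  aws iam add-user-to-group --user-name "$user" --group-name "$group"
  # The returned access key goes into GitHub Secrets, never the repo
  aws iam create-access-key --user-name "$user"
}
```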
3

AWS Infrastructure Deployment

Multi-AZ VPC with ECS Fargate cluster, Application Load Balancer, and production-ready networking configuration deployed via CDKTF.

Multi-AZ VPC
Two subnets across two availability zones for high availability and fault tolerance.
Application Load Balancer
HTTPS with ACM certificate and HTTP→HTTPS redirect.
ECS Fargate
Serverless container orchestration with auto-scaling capability.
Security Groups
Least-privilege network access control between ALB and tasks.
ACTIVE ECS Cluster Overview
turbovets-app-dev-cluster with running service
ECS Clusters
SERVICE ECS Service Summary
Fargate service configuration and task details
ECS Service Summary
✓ HEALTHY Health Status
Target group health checks passing
ECS Health Status
NETWORKING Network Configuration
VPC, subnets, and security group assignments
ECS Network Configuration
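The service state shown in these screenshots can also be verified from the command line; the cluster name matches the screenshots, while the service-name pattern is an assumption:

```shell
# Summarize one environment's Fargate service state.
check_service() {
  local env="$1"
  aws ecs describe-services \
    --cluster "turbovets-app-${env}-cluster" \
    --services "turbovets-app-${env}-service" \
    --query 'services[0].{status:status,running:runningCount,desired:desiredCount}' \
    --output table
}
```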
📌 Infrastructure Highlights
  • All core resources (VPC, ALB, ECS, security groups) are provisioned via CDK for Terraform.
  • Traffic enters through Route 53 → ALB (HTTPS) → ECS Fargate tasks.
  • Observability via CloudWatch Logs for each ECS task family and environment.
  • Networking and IAM are aligned with AWS Well-Architected best practices for small, production-grade workloads.
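For the observability point above, a sketch of tailing one environment's task logs with AWS CLI v2; the /ecs/turbovets-app-<env> log-group naming is an assumption:

```shell
# Tail a task family's CloudWatch logs for one environment.
tail_app_logs() {
  local env="$1"
  aws logs tail "/ecs/turbovets-app-${env}" --follow --since 15m
}
```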