Shivanshu Sharma for AWS Community Builders


Building an Enterprise-Grade AWS CI/CD Pipeline with Terraform

How I automated AWS deployments with CodePipeline, CodeCommit, and CodeBuild for under $5/month

The DevOps Challenge

Let me take you on a technical adventure that recently consumed my weekends and late nights. While I’ve mastered the arts of Jenkins, GitHub Actions, and Azure DevOps over the years, AWS’s native CI/CD services remained unexplored territory in my professional journey. When tasked with implementing a fully AWS-native DevOps pipeline for a crucial enterprise SSO project, I knew I was in for both a challenge and a revelation.

Truth be told, I approached this mission with equal parts excitement and skepticism. Would AWS’s homegrown CI/CD solutions match the maturity of the standalone counterparts I’ve grown to love? Spoiler alert: they don’t — at least not yet — but they certainly bring their own flavor of magic to the table.

The goal seemed straightforward at first: automate the deployment of AWS SSO configurations through a fully managed CI/CD pipeline. But as with most interesting DevOps problems, the devil was in the details. I would need to navigate the peculiarities of AWS’s native services, overcome their integration quirks, and piece together a solution that was both robust and maintainable. The journey took me through CodeCommit’s sparse interface, CodeBuild’s container idiosyncrasies, and CodePipeline’s rather opinionated workflow design — all while maintaining Terraform as my infrastructure orchestration tool of choice.

The Mission: Infrastructure as Code Automation

Before diving into code snippets and configuration files, let’s understand what I was building. My task centered around AWS Single Sign-On (SSO) automation for a large enterprise with dozens of AWS accounts and multiple teams requiring different levels of access:

  1. Create a system to provision Permission Sets (essentially IAM roles on steroids) into AWS SSO

  2. Establish automated linking between these Permission Sets, user Groups, and AWS accounts

  3. Set up a CI/CD pipeline to deploy changes automatically whenever configurations are updated

  4. Package everything neatly as infrastructure-as-code using Terraform

  5. Ensure the entire process was auditable, reproducible, and compliant with security best practices

In essence, I needed to build a self-service platform where different teams could access their assigned AWS accounts with appropriate permission levels, all managed through a central AWS SSO portal. This would replace our existing manual process where identity and access management changes required tickets, manual approvals, and direct console work — a process that typically took days and was prone to human error.

The catch? Making this process as automated and hands-off as possible while maintaining appropriate security controls. As AWS SSO lacks certain API capabilities for user and group provisioning (as of this writing), this part would remain a console operation performed by our security team. However, everything else was fair game for automation.

The challenge was further complicated by our organization’s strict security requirements:

  • All infrastructure changes must be tracked and auditable

  • Deployments must require explicit approvals

  • The entire solution must be version-controlled

  • No permanent admin credentials should be used in the pipeline

This is precisely the type of complex, multi-faceted DevOps problem I live for.

Assembling My AWS DevOps Arsenal

For this operation, I enlisted four core AWS services to create a seamless CI/CD experience:

  • CodeCommit: My code repository (AWS’s equivalent of GitHub)

  • CodeBuild: My execution environment for running Terraform commands

  • CodePipeline: The orchestrator connecting all the pieces

  • S3: My artifact storage system

Each of these services has its own strengths and limitations that influenced my architectural decisions:

CodeCommit offers tight integration with other AWS services but lacks many developer-friendly features found in GitHub or GitLab. While it supports branch policies and approval rules, its web interface for code reviews is quite basic. However, it does provide IAM-based access control, which aligns perfectly with our security requirements and eliminates the need for SSH keys or personal access tokens.

CodeBuild runs your build commands in isolated Docker containers, which is excellent for consistency but introduces challenges with filesystem operations like symbolic links (which I’ll discuss later). It supports various compute types ranging from 3GB to 64GB of memory, though I found the smallest tier more than sufficient for our Terraform operations.

CodePipeline orchestrates the entire process but follows a linear execution model with limited branching capabilities. It allows for manual approval steps, which was crucial for our compliance requirements, but lacks some advanced features like parallel execution paths or conditional stages.

S3 serves as a simple yet effective artifact repository, with versioning capabilities that create an audit trail of all our infrastructure changes.
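
Versioning is not enabled by default on a new bucket, so if you want that audit trail you have to switch it on yourself. A minimal sketch with the CLI follows; the bucket name is an illustrative placeholder, not the one used later in this post:

# Enable versioning on the artifact bucket (bucket name is illustrative)
aws s3api put-bucket-versioning \
  --bucket my-pipeline-artifacts-bucket \
  --versioning-configuration Status=Enabled

In Terraform you would typically achieve the same thing with an aws_s3_bucket_versioning resource sitting next to the bucket definition.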

Let’s see how I assembled these pieces into a coherent infrastructure automation puzzle.

First Steps: Setting Up CodeCommit

While my organization already uses GitHub for most projects, this particular project needed to live within our AWS ecosystem for security and compliance reasons. Setting up a CodeCommit repository is straightforward through the AWS console — much simpler than configuring a full-featured GitHub organization with proper permissions.

After creating a new repository (I named mine sso-permission-sets), the critical step is capturing the Clone URL for local development:

  1. Click on the repository name in the CodeCommit console

  2. Open the “Clone URL” dropdown in the upper right

  3. Select “Clone HTTPS (GRC)” — this is important as it uses the git-remote-codecommit helper

The resulting URL contains your region and repository name, looking something like https://git-codecommit.ap-south-1.amazonaws.com/v1/repos/sso-permission-sets. This format threw me off initially; it's not the standard Git URL format you might be used to. Later, you'll notice we use a different format altogether with the git-remote-codecommit helper.

One nice aspect of CodeCommit is that it leverages your existing AWS authentication mechanisms rather than requiring separate SSH keys or personal access tokens. This simplifies credential management, especially in enterprise environments where key rotation policies are strictly enforced.

Creating My Local Development Environment

To effectively work with this setup, I needed a well-configured local development environment with several tools properly installed and configured. Here’s my detailed setup process:

Installing and Configuring Required Tools

First, I made sure I had AWS CLI v2 installed, as v1 lacks some of the SSO functionality needed:

aws --version

If you’re on macOS, you can install it with:

brew install awscli

Next came the most crucial part — configuring SSO access for seamless authentication. This was essential as my organization had moved away from long-lived IAM access keys to temporary credentials via SSO:

aws configure sso --profile terraform-deployer

This interactive command prompted me for:

  • The SSO start URL (our company portal)

  • AWS region for SSO (ap-south-1 in our case)

  • Default output format (json)

  • Default AWS account and permission set (I selected our infrastructure account and PowerUser role)

Once completed, a browser window opened for SSO authentication. With this configuration saved, I could now authenticate simply by running:

aws sso login --profile terraform-deployer

This generates temporary credentials valid for 8 hours — perfect for a development session without compromising security.
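
For reference, the command writes a profile block like the following into ~/.aws/config (the values here are illustrative placeholders; newer CLI versions may instead generate a separate sso-session section):

# ~/.aws/config (excerpt, illustrative values)
[profile terraform-deployer]
sso_start_url  = https://my-company.awsapps.com/start
sso_region     = ap-south-1
sso_account_id = 123456789012
sso_role_name  = PowerUserAccess
region         = ap-south-1
output         = json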

Setting Up Terraform and Version Management

For Terraform, I needed version 0.15 or higher to leverage newer features like improved plan file handling:

terraform --version
# Should return: Terraform v0.15.5 or higher

Since I juggle multiple client projects with different Terraform version requirements, I use tfenv, which is essentially nvm but for Terraform:

brew install tfenv

# Install specific Terraform version
tfenv install 0.15.5

# Set as default
tfenv use 0.15.5

# Verify installed versions
tfenv list
# Shows: * 0.15.5 (set by /Users/shivanshu.sharma/.tfenv/version)
#        0.14.11
#        0.13.7

Connecting to CodeCommit

Finally, to work with CodeCommit, I needed Git and the specialized CodeCommit helper that integrates with AWS authentication:

# Verify Git version
git --version
# Should be 1.7.9 or higher

# Install the CodeCommit helper
python3 -m pip install git-remote-codecommit

The git-remote-codecommit package is critical — it enables Git to understand and use the special codecommit:// URL scheme that integrates with AWS authentication. Without it, connecting to CodeCommit repositories becomes much more complicated.

With VS Code (plus the HashiCorp Terraform extension) as my IDE, I cloned the empty repository using the special codecommit format:

git clone codecommit::ap-south-1://terraform-deployer@sso-permission-sets
cd sso-permission-sets && code .

Notice the URL format: codecommit::<region>://<profile>@<repository>. This leverages the AWS credentials associated with the specified profile, eliminating the need for separate Git credentials.

Validating My Setup

To ensure everything was working correctly, I created a simple README.md file and pushed it to the repository:

echo "# AWS SSO Permission Sets" > README.md
git add README.md
git commit -m "Initial commit"
git push

After refreshing the CodeCommit console, I could see my README appeared correctly — confirmation that my local development environment was properly configured and ready for the real work ahead.

Repository Structure and Organization

Let me share how I organized my code:

sso-permission-sets/
├── configurations/
│   ├── permission_sets_pipeline/
│   │   ├── codepipeline.tf
│   │   ├── iam.tf
│   │   ├── codebuild.tf
│   │   ├── s3.tf
│   │   ├── buildspec-plan.yml
│   │   └── buildspec-apply.yml
│   ├── global.tf
│   ├── output.tf
│   ├── provider.tf
│   ├── version.tf
│   └── state.tf

I placed the common configuration files at the top level of the configurations/ directory and symlinked them into the pipeline-specific directories. One quirk I discovered: AWS CodeBuild doesn't handle symbolic links well, so I ultimately copied these files in the buildspecs instead of linking them.
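
For context, this is roughly what the original approach looked like versus the workaround. The exact behaviour depends on how CodePipeline zips the source artifact, so treat this as a sketch rather than a guarantee:

# What I tried first: symlink the shared files into the pipeline directory
cd configurations/permission_sets_pipeline
ln -s ../global.tf ../provider.tf ../version.tf ../state.tf ../output.tf .

# The links didn't survive the trip through the pipeline's zipped source
# artifact, so the buildspecs shown later simply copy the real files instead:
cp ../global.tf ../provider.tf ../version.tf ../state.tf ../output.tf .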

Orchestrating the CI/CD Pipeline

The heart of my solution lives in codepipeline.tf, where I defined a four-stage pipeline:

resource "aws_codepipeline" "codepipeline" {
  name     = var.pipeline_name
  role_arn = aws_iam_role.codepipeline_role.arn
Enter fullscreen mode Exit fullscreen mode
  artifact_store {
    location = aws_s3_bucket.codepipeline_bucket.bucket
    type     = "S3"
  }  # Stage 1: Clone the repository
  stage {
    name = "Clone"    action {
      name             = "Source"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeCommit"
      version          = "1"
      output_artifacts = ["CodeWorkspace"]      configuration = {
        RepositoryName       = var.repository_name
        BranchName           = var.branch_name
        PollForSourceChanges = "true"
      }
    }
  }  # Stage 2: Run terraform plan
  stage {
    name = "Plan"    action {
      name             = "Plan"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      input_artifacts  = ["CodeWorkspace"]
      output_artifacts = ["TerraformPlanFile"]
      version          = "1"      configuration = {
        ProjectName = aws_codebuild_project.plan_project.name
        EnvironmentVariables = jsonencode([
          {
            name  = "PIPELINE_EXECUTION_ID"
            value = "#{codepipeline.PipelineExecutionId}"
            type  = "PLAINTEXT"
          }
        ])
      }
    }
  }  # Stage 3: Require manual approval
  stage {
    name = "Manual-Approval"    action {
      name     = "Approval"
      category = "Approval"
      owner    = "AWS"
      provider = "Manual"
      version  = "1"
    }
  }  # Stage 4: Apply the terraform changes
  stage {
    name = "Apply"    action {
      name            = "Deploy"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      input_artifacts = ["CodeWorkspace", "TerraformPlanFile"]
      version         = "1"      configuration = {
        ProjectName     = aws_codebuild_project.apply_project.name
        PrimarySource   = "CodeWorkspace"
        EnvironmentVariables = jsonencode([
          {
            name  = "PIPELINE_EXECUTION_ID"
            value = "#{codepipeline.PipelineExecutionId}"
            type  = "PLAINTEXT"
          }
        ])
      }
    }
  }
}
Enter fullscreen mode Exit fullscreen mode

The pipeline flow is elegantly simple:

  1. Clone: Fetch the latest code from CodeCommit

  2. Plan: Run terraform plan and save the output as an S3 artifact

  3. Manual-Approval: Allow human verification of proposed changes

  4. Apply: Run terraform apply using the saved plan file

A few implementation insights I discovered along the way:

  • Artifact names should avoid hyphens to prevent ambiguous Terraform errors (use camelCase or snake_case instead)

  • CodePipeline variables can be accessed using the #{codepipeline.VariableName} syntax

  • For the Apply stage, since it uses multiple input artifacts, I had to specify which one should be the primary source

Setting Up CodeBuild Projects

For each CodeBuild stage (Plan and Apply), I needed to define a build project:

resource "aws_codebuild_project" "plan_project" {
  name          = "${var.pipeline_name}-plan"
  description   = "Plan stage for ${var.pipeline_name}"
  build_timeout = "5"
  service_role  = aws_iam_role.codebuild_role.arn
Enter fullscreen mode Exit fullscreen mode
  artifacts {
    type = "CODEPIPELINE"
  }  environment {
    compute_type                = var.build_compute_type
    image                       = var.build_image
    type                        = var.build_container_type
    image_pull_credentials_type = "CODEBUILD"
  }  source {
    type      = "CODEPIPELINE"
    buildspec = file("${path.module}/buildspec-plan.yml")
  }
}resource "aws_codebuild_project" "apply_project" {
  name          = "${var.pipeline_name}-apply"
  description   = "Apply stage for ${var.pipeline_name}"
  build_timeout = "5"
  service_role  = aws_iam_role.codebuild_role.arn  artifacts {
    type = "CODEPIPELINE"
  }  environment {
    compute_type                = var.build_compute_type
    image                       = var.build_image
    type                        = var.build_container_type
    image_pull_credentials_type = "CODEBUILD"
  }  source {
    type      = "CODEPIPELINE"
    buildspec = file("${path.module}/buildspec-apply.yml")
  }
}
Enter fullscreen mode Exit fullscreen mode

The variables in my global.tf define the CodeBuild environment specifications:

variable "build_compute_type" {
  description = "CodeBuild compute type"
  type        = string
  default     = "BUILD_GENERAL1_SMALL"
}
Enter fullscreen mode Exit fullscreen mode
variable "build_image" {
  description = "CodeBuild container image"
  type        = string
  default     = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
}variable "build_container_type" {
  description = "CodeBuild container type"
  type        = string
  default     = "LINUX_CONTAINER"
}
Enter fullscreen mode Exit fullscreen mode

Crafting the BuildSpec Files

The real magic happens in the buildspec files that tell CodeBuild what to do. Here’s my plan buildspec:

version: 0.2

env:
  variables:
    TF_VERSION: "0.15.5"
    PERMISSION_SETS_DIR: "configurations/permission_sets_pipeline"

phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - echo "Installing terraform version ${TF_VERSION}"
      - curl -s -qL -o terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
      - unzip terraform.zip
      - mv terraform /usr/bin/terraform
      - rm terraform.zip

  build:
    commands:
      - echo "Starting build phase"
      - cd ${CODEBUILD_SRC_DIR}

      # Copy linked files to make sure they're available
      - cp ${CODEBUILD_SRC_DIR}/configurations/global.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/provider.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/version.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/state.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/output.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/

      - cd ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}
      - terraform init
      - terraform validate
      - terraform plan -out=tfplan_commitid_${CODEBUILD_RESOLVED_SOURCE_VERSION}_pipelineid_${PIPELINE_EXECUTION_ID}

artifacts:
  files:
    - ${PERMISSION_SETS_DIR}/tfplan_commitid_${CODEBUILD_RESOLVED_SOURCE_VERSION}_pipelineid_${PIPELINE_EXECUTION_ID}
  name: TerraformPlanFile

And the apply buildspec:

version: 0.2

env:
  variables:
    TF_VERSION: "0.15.5"
    PERMISSION_SETS_DIR: "configurations/permission_sets_pipeline"

phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - echo "Installing terraform version ${TF_VERSION}"
      - curl -s -qL -o terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
      - unzip terraform.zip
      - mv terraform /usr/bin/terraform
      - rm terraform.zip

  build:
    commands:
      - echo "Starting build phase"
      - cd ${CODEBUILD_SRC_DIR}

      # Copy linked files to make sure they're available
      - cp ${CODEBUILD_SRC_DIR}/configurations/global.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/provider.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/version.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/state.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/output.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/

      - cd ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}
      - terraform init
      - cp ${CODEBUILD_SRC_DIR_TerraformPlanFile}/configurations/permission_sets_pipeline/tfplan_commitid_${CODEBUILD_RESOLVED_SOURCE_VERSION}_pipelineid_${PIPELINE_EXECUTION_ID} .
      - terraform apply -auto-approve tfplan_commitid_${CODEBUILD_RESOLVED_SOURCE_VERSION}_pipelineid_${PIPELINE_EXECUTION_ID}

A critical discovery I made: when dealing with multiple artifacts in CodeBuild, you need to use CODEBUILD_SRC_DIR_ArtifactName to access secondary artifacts. This was the trickiest part of the whole setup!
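
If you're ever unsure where a secondary artifact lands inside the build container, a couple of throwaway debug commands in the buildspec make it obvious. These are hypothetical additions for troubleshooting, not part of the buildspecs above:

      - echo "Primary source dir:  ${CODEBUILD_SRC_DIR}"
      - echo "Plan artifact dir:   ${CODEBUILD_SRC_DIR_TerraformPlanFile}"
      - ls -R "${CODEBUILD_SRC_DIR_TerraformPlanFile}"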

Creating the S3 Artifact Bucket

Finally, I needed a place to store my pipeline artifacts:

resource "aws_s3_bucket" "codepipeline_bucket" {
  bucket = var.artifact_bucket
}
Enter fullscreen mode Exit fullscreen mode
resource "aws_s3_bucket_acl" "codepipeline_bucket_acl" {
  bucket = aws_s3_bucket.codepipeline_bucket.id
  acl    = "private"
}resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.codepipeline_bucket.id  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
Enter fullscreen mode Exit fullscreen mode

Nothing fancy here — just a private, encrypted bucket for storing our Terraform plans and other artifacts.

Final Result and Cost Analysis

After applying all these configurations, I had a fully functional AWS-native CI/CD pipeline that would:

  1. Detect changes to the repository

  2. Plan the Terraform changes

  3. Wait for human approval

  4. Apply the changes automatically

The best part? The entire solution costs approximately $3.01 per month if you run it once daily. That’s less than a fancy coffee!

Overcoming Unexpected Challenges

While the solution appears straightforward now, I encountered several unexpected obstacles along the way:

Dynamic State Management

My first attempt used a local state file, which quickly proved problematic in a CI/CD environment. I pivoted to using S3 backend with DynamoDB locking to ensure state consistency:

terraform {
  backend "s3" {
    bucket         = "terraform-state-xxcloud-infra"
    key            = "sso-permission-sets/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}

Handling Permission Boundaries

Our organization requires permission boundaries on all IAM roles. The IAM roles used by CodeBuild and CodePipeline needed these boundaries applied, which required a bit of additional configuration:

resource "aws_iam_role" "codebuild_role" {
  name                 = "${var.pipeline_name}-codebuild-role"
  assume_role_policy   = data.aws_iam_policy_document.codebuild_assume_policy.json
  permissions_boundary = "arn:aws:iam::${local.account_id}:policy/StandardPermissionsBoundary"
}

Cross-Account Pipeline Considerations

While my initial implementation worked within a single account, we later expanded it to deploy permission sets across multiple accounts. This required adjusting the trust relationships and adding cross-account role assumptions to the pipeline.
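
The article doesn't include that configuration, but to give a flavour of it, here is a minimal sketch of assuming a role in a target account from inside the build container before running Terraform. The role name, account ID, and the assumption that jq is available in the build image are placeholders and assumptions, not details from the real project:

# Assume a deployment role in the target account (ARN is illustrative)
CREDS=$(aws sts assume-role \
  --role-arn "arn:aws:iam::222222222222:role/sso-permission-sets-deployer" \
  --role-session-name codebuild-cross-account \
  --query 'Credentials' --output json)

# Export the temporary credentials so Terraform's AWS provider picks them up
export AWS_ACCESS_KEY_ID=$(echo "${CREDS}" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "${CREDS}" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "${CREDS}" | jq -r '.SessionToken')

terraform init && terraform plan

In practice you could also let the AWS provider handle this with an assume_role block in the provider configuration, which keeps the buildspec simpler.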

Security Considerations

Security was paramount in this implementation:

  1. Least Privilege Access: The IAM roles for CodeBuild and CodePipeline have tightly scoped permissions

  2. Artifact Encryption: All pipeline artifacts are encrypted at rest in S3

  3. Manual Approval Gates: Changes require explicit human approval before deployment

  4. Auditable History: Every change is trackable through Git history and pipeline execution logs

  5. Temporary Credentials: No long-lived access keys are used in the pipeline

Cost Analysis and Optimization

One pleasant surprise was the cost-effectiveness of this solution. With daily executions, the entire infrastructure comes in at roughly $3 to $5 per month.
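
To sanity-check that figure, a rough back-of-the-envelope based on public AWS pricing at the time of writing (treat the exact numbers as assumptions): one active CodePipeline is about $1 per month; two BUILD_GENERAL1_SMALL builds per day at roughly three minutes each works out to around 180 build-minutes a month, or about $0.90 at roughly $0.005 per minute; CodeCommit is free for the first five active users; and the S3 artifacts add a few cents. Build duration is the main variable that pushes the total toward the upper end of the range.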

We could further optimize by using EventBridge to trigger the pipeline only on actual changes rather than polling for changes, but the cost savings would be minimal.
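
For completeness, a minimal sketch of that event-driven trigger using the CLI is below. The rule name, ARNs, and IAM role are placeholders, and you would also flip PollForSourceChanges to "false" in the pipeline's source action:

# Fire when the tracked branch of the repository changes (ARNs are illustrative)
aws events put-rule \
  --name sso-permission-sets-on-push \
  --event-pattern '{
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "resources": ["arn:aws:codecommit:ap-south-1:111111111111:sso-permission-sets"],
    "detail": {
      "event": ["referenceCreated", "referenceUpdated"],
      "referenceType": ["branch"],
      "referenceName": ["main"]
    }
  }'

# Point the rule at the pipeline; the role needs codepipeline:StartPipelineExecution
aws events put-targets \
  --rule sso-permission-sets-on-push \
  --targets 'Id=pipeline,Arn=arn:aws:codepipeline:ap-south-1:111111111111:sso-permission-sets-pipeline,RoleArn=arn:aws:iam::111111111111:role/events-start-pipeline'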

Conclusion and Key Takeaways

Building this AWS-native CI/CD pipeline was a fascinating journey that expanded my DevOps toolkit. While AWS’s CI/CD services don’t yet match the polish of dedicated solutions like Jenkins or GitHub Actions, they integrate beautifully with other AWS services and provide a cost-effective approach to automating infrastructure deployments.

The most valuable lessons I learned:

  1. Filesystem Quirks: CodeBuild has peculiar behavior with symbolic links and secondary artifacts

  2. Environment Variable Magic: Use pipeline execution IDs and commit hashes to create unique artifact names

This infrastructure-as-code approach to managing AWS SSO permission sets has dramatically improved our operational efficiency. What used to take days of manual work and coordination now happens automatically in minutes with full traceability and security controls.

Have you built similar pipelines with AWS’s native services? What challenges did you encounter?

I’d love to hear about your experiences and any optimization tips you might have!


Top comments (2)

Jatin Mehrotra

Isn't CodeCommit discontinued?

Shivanshu Sharma

AWS CodeCommit is no longer available for new customers, but existing users can continue to use their repositories.
