
AWS Journey: Using S3 in CI/CD Pipeline

June 12, 2025

After optimizing how we build and run Docker containers, I ran into another bottleneck in the CI/CD process: managing deployment configuration files. I've used AWS S3 before at my previous job, mainly for storing product images and static assets. But as I've been preparing for the AWS Developer Associate certification, I started thinking differently about the tools I thought I already knew.

Why Use S3 in CI/CD?

  1. Current Challenges:

    • ❌ EC2 must be online → If the instance is stopped or unreachable, deployment fails.
    • ❌ Security risk → SSH private keys must be stored in the CI/CD pipeline.
  2. Benefits of S3 Integration:

    • ✅ S3 is always available → EC2 can fetch the file at startup, even if it was offline during deployment.
    • ✅ No SSH required → This removes the need to store private SSH keys in GitLab CI/CD.
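To make the "fetch at startup" idea concrete, here is a minimal boot-time sketch (e.g. in EC2 user data). The bucket name is this post's example, and the app directory is a hypothetical path; it assumes the instance profile grants `s3:GetObject` on the bucket:

```shell
#!/bin/sh
# Sketch: fetch the latest compose file from S3 when the instance boots.
set -eu

S3_BUCKET="s3://myapp-config-deployment"  # example bucket from this post
APP_DIR="/opt/myapp"                      # hypothetical app directory

mkdir -p "$APP_DIR"
cd "$APP_DIR"

# S3 is always reachable, so this works even if the instance was
# stopped while the CI/CD pipeline uploaded a new version.
aws s3 cp "$S3_BUCKET/docker-compose.prod.yml" docker-compose.prod.yml
```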

Implementation Steps

1. Create S3 Bucket

First, let's create a dedicated S3 bucket for our Docker Compose file:

  • Access AWS Console > Amazon S3
  • Click "Create bucket"
  • Name: myapp-config-deployment
  • Object Ownership: ACLs disabled (recommended)
  • Region: Same as your EC2 instances
  • Block all public access: ✅
  • Bucket Versioning: Enabled
  • Tags: optional; you can skip them
  • Server-side encryption with Amazon S3 managed keys (SSE-S3)
  • Bucket Key: Enabled
  • Click "Create Bucket"
Screenshot: creating the S3 bucket with the settings above
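If you prefer the AWS CLI over the console, the same settings can be applied with a few commands. This is a sketch: the region is an assumption, and bucket names must be globally unique, so replace both with your own:

```shell
# Create the bucket (assumed region: ap-southeast-1)
aws s3api create-bucket \
  --bucket myapp-config-deployment \
  --region ap-southeast-1 \
  --create-bucket-configuration LocationConstraint=ap-southeast-1

# Block all public access
aws s3api put-public-access-block \
  --bucket myapp-config-deployment \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Enable versioning (this is what gives us config rollback later)
aws s3api put-bucket-versioning \
  --bucket myapp-config-deployment \
  --versioning-configuration Status=Enabled

# SSE-S3 encryption with a bucket key
aws s3api put-bucket-encryption \
  --bucket myapp-config-deployment \
  --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"},"BucketKeyEnabled":true}]}'
```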

2. Update GitLab CI/CD Variables

  • Key: S3_BUCKET
  • Value: s3://myapp-config-deployment (or s3://your-bucket-name)
  • Type: Variable (default)
  • Environments: All (default)
  • Visibility: Visible (bucket names are not sensitive)
  • Flags (Protect variable): ✅
  • Flags (Expand variable reference): ✅
  • Description: optional; you can leave it blank

Why Add S3_BUCKET as a GitLab CI/CD Variable?

  • Keeps the bucket name configurable without code changes
  • Prevents hardcoding infrastructure names in the pipeline configuration
  • Makes it easy to point different environments at different buckets
Screenshot: updating the GitLab CI/CD variable
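Inside a pipeline job, a quick sanity check that the variable is set and the bucket is reachable might look like this (a sketch; it assumes the job image has the AWS CLI installed and credentials configured):

```shell
# Fail fast if the CI/CD variable is missing, then probe the bucket.
: "${S3_BUCKET:?S3_BUCKET is not set - check GitLab CI/CD variables}"
aws s3 ls "$S3_BUCKET" >/dev/null && echo "Bucket reachable: $S3_BUCKET"
```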

3. Update GitLab CI/CD script

Let's modify the deploy stage of our CI/CD pipeline to upload the configuration file to S3:

deploy:
  stage: deploy
  tags:
    - docker
  needs:
    - build
  before_script:
    - apk add --no-cache aws-cli
  script:
    - |
      echo "Starting deployment."
      echo "Commit ID: $IMAGE_TAG"
      
      # Upload compose file to a fixed path (S3 versioning will handle versions)
      aws s3 cp docker-compose.prod.yml "$S3_BUCKET/docker-compose.prod.yml"

    - |
      # Update Parameter Store with current commit SHA
      aws ssm put-parameter \
        --name "/myapp/config/image-tag" \
        --value "$IMAGE_TAG" \
        --type String \
        --overwrite
      STORED_TAG=$(aws ssm get-parameter \
        --name "/myapp/config/image-tag" \
        --query "Parameter.Value" \
        --output text)
      
      if [ "$STORED_TAG" != "$IMAGE_TAG" ]; then
        echo "❌ Parameter update failed!"
        exit 1
      fi
      echo "✅ Parameter updated successfully"
    - |
      # Run the deployment on the instance via SSM Run Command (no SSH).
      # Note: inner double quotes are escaped (\") so the JSON document
      # stays valid inside the single-quoted --parameters string, and
      # arguments without spaces are left unquoted.
      COMMAND_ID=$(aws ssm send-command \
        --instance-ids "$EC2_INSTANCE_ID" \
        --document-name "AWS-RunShellScript" \
        --parameters '{"commands":[
          "DIRECTORY_APP=$(aws ssm get-parameter --name /myapp/config/directory-app --query Parameter.Value --output text)",
          "cd $DIRECTORY_APP",
          "rm -f docker-compose.prod.yml",
          "S3_BUCKET=$(aws ssm get-parameter --name /myapp/config/s3-bucket-name --query Parameter.Value --output text)",
          "aws s3 cp $S3_BUCKET/docker-compose.prod.yml docker-compose.prod.yml",
          "echo \"Fetching configuration from Parameter Store and Secrets Manager\"",
          "REGION=$(aws ec2 describe-availability-zones --output text --query \"AvailabilityZones[0].RegionName\")",
          "SECRET_NAME=$(aws ssm get-parameter --name /myapp/config/secret-name --query Parameter.Value --output text)",
          "IMAGE_TAG=$(aws ssm get-parameter --name /myapp/config/image-tag --query Parameter.Value --output text)",
          "secrets=$(aws secretsmanager get-secret-value --secret-id $SECRET_NAME --region $REGION --query SecretString --output text)",
          "echo \"Secrets fetched successfully 🎉\"",
          "export DB_HOST=$(echo $secrets | jq -r .database.DB_HOST)",
          "export DB_USER=$(echo $secrets | jq -r .database.DB_USER)",
          "export DB_PORT=$(echo $secrets | jq -r .database.DB_PORT)",
          "export DB_PASSWORD=$(echo $secrets | jq -r .database.DB_PASSWORD)",
          "export DB_ROOT_PASSWORD=$(echo $secrets | jq -r .database.DB_ROOT_PASSWORD)",
          "export DB_NAME=$(echo $secrets | jq -r .database.DB_NAME)",
          "export AWS_REGION=$REGION",
          "export ECR_REPOSITORY_URL=$(echo $secrets | jq -r .aws.ECR_REPOSITORY_URL)",
          "export IMAGE_TAG=$IMAGE_TAG",
          "export PORT=$(echo $secrets | jq -r .app.PORT)",
          "export GIN_MODE=$(echo $secrets | jq -r .app.GIN_MODE)",
          "aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPOSITORY_URL",
          "docker pull $ECR_REPOSITORY_URL:$IMAGE_TAG || true",
          "if ! docker ps --filter name=mysql-prod --filter status=running | grep -q mysql-prod; then",
          "  echo \"Starting mysql service...\"",
          "  DB_USER=$DB_USER DB_ROOT_PASSWORD=$DB_ROOT_PASSWORD docker-compose -f docker-compose.prod.yml up -d mysql",
          "else",
          "  echo \"MySQL is already running ✅\"",
          "fi",
          "docker-compose -f docker-compose.prod.yml stop go",
          "docker-compose -f docker-compose.prod.yml rm -f go",
          "docker images $ECR_REPOSITORY_URL -q | grep -v $IMAGE_TAG | xargs -r docker rmi -f || true",
          "DB_HOST=$DB_HOST DB_USER=$DB_USER DB_PORT=$DB_PORT DB_PASSWORD=$DB_PASSWORD DB_ROOT_PASSWORD=$DB_ROOT_PASSWORD DB_NAME=$DB_NAME PORT=$PORT GIN_MODE=$GIN_MODE ECR_REPOSITORY_URL=$ECR_REPOSITORY_URL IMAGE_TAG=$IMAGE_TAG docker-compose -f docker-compose.prod.yml up -d go",
          "echo \"Deployment completed successfully 🎉\""
        ]}' \
        --output text \
        --query "Command.CommandId")
    - |
      echo "Waiting for command completion..."
      while true; do
        STATUS=$(aws ssm list-commands \
          --command-id "$COMMAND_ID" \
          --query "Commands[0].Status" \
          --output text)  
        
        if [ "$STATUS" = "Success" ]; then
          echo "Command completed successfully"
          break
        elif [ "$STATUS" = "Failed" ] || [ "$STATUS" = "Cancelled" ] || [ "$STATUS" = "TimedOut" ]; then
          echo "Command ended with status: $STATUS"
          exit 1
        fi
        
        echo "Status: $STATUS"
        sleep 3
      done
    - |
      aws ssm get-command-invocation \
        --command-id "$COMMAND_ID" \
        --instance-id "$EC2_INSTANCE_ID" \
        --query "StandardOutputContent" \
        --output text
  rules:
    - if: $CI_COMMIT_BRANCH == "main" && $CI_PIPELINE_SOURCE == "push"
      when: on_success

4. Create Parameter for S3 Bucket

Why Parameter Store?

  • Securely store the S3 bucket name
  • Enable EC2 to know which bucket to pull from
  • Access AWS Console > Parameter Store
  • Click "Create parameter"
  • Set the parameter name to /myapp/config/s3-bucket-name (or any name you like; AWS recommends a hierarchy separated by forward slashes)
  • Description: optional; you can leave it empty
  • Choose the "Standard" tier
  • Set the parameter type to "String"
  • Set the data type to "text"
  • Set the parameter value, e.g. s3://myapp-config-deployment or s3://your-bucket-name
  • Tags: optional; you can leave them empty
  • Click "Create Parameter"
Screenshot: creating the Parameter Store parameter for the S3 bucket
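The console steps above also have a short CLI equivalent, which is handy for scripting the setup (a sketch using this post's example names):

```shell
# Standard tier, String type - same as the console steps above
aws ssm put-parameter \
  --name /myapp/config/s3-bucket-name \
  --type String \
  --value s3://myapp-config-deployment

# Read it back to confirm it round-trips
aws ssm get-parameter \
  --name /myapp/config/s3-bucket-name \
  --query Parameter.Value \
  --output text
```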

5. Update IAM Role with custom policy

We need to update our current IAM role so our EC2 instance can upload and download the docker-compose.prod.yml file from the S3 bucket.

  • Access AWS Console > IAM > Roles
  • Click the existing IAM role we've been using so far (EC2RoleDemo)
  • Click "Add Permissions" > "Create inline policy"
  • Choose JSON and add:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::myapp-config-deployment",
        "arn:aws:s3:::myapp-config-deployment/*"
      ]
    }
  ]
}
  • Click "Next"
  • Add policy name
  • Click "Create Policy"
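The same inline policy can be attached from the CLI (a sketch: `S3ConfigDeployAccess` is a hypothetical policy name, and the JSON above is assumed to be saved as `s3-deploy-policy.json`):

```shell
# Attach the inline policy to the existing instance role
aws iam put-role-policy \
  --role-name EC2RoleDemo \
  --policy-name S3ConfigDeployAccess \
  --policy-document file://s3-deploy-policy.json
```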

Why Custom Policy?

  • Follows AWS security best practices
  • Implements principle of least privilege
  • Provides clear documentation of permissions
  • Makes security audits easier
  • Reduces potential security risks
Screenshot: updating the IAM role

6. Testing the Setup

Let's verify our S3 integration:

Push a Change:

  • Make a change in your codebase

Check S3 Bucket:

  • Navigate to S3 bucket
  • Verify files are uploaded
  • Check versioning works

Verify EC2:

  • Connect to the EC2 instance (via SSM Session Manager, or SSH)
  • Check config files are updated
  • Verify application is running with new config
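A few commands that cover the checklist above (the app directory on the instance is a hypothetical path):

```shell
# From your workstation: confirm the upload and that versioning works
aws s3 ls s3://myapp-config-deployment/
aws s3api list-object-versions \
  --bucket myapp-config-deployment \
  --prefix docker-compose.prod.yml \
  --query "Versions[].{id:VersionId,modified:LastModified,latest:IsLatest}"

# On the instance: confirm the new config arrived and the app container is up
head /opt/myapp/docker-compose.prod.yml   # hypothetical app directory
docker ps --filter name=go
```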

Benefits Achieved

  1. Centralized Storage:

    • All configs in one place
    • Easy to manage multiple environments
    • Version control for configurations
  2. Rollback Capability:

    • Each config version preserved
    • Quick recovery from issues
    • Full audit trail
  3. Security:

    • No public access
    • IAM role-based access
    • Encrypted storage
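Rollback with bucket versioning is a two-step affair; the version ID below is a placeholder you would copy from the first command's output:

```shell
# 1. List the stored versions of the compose file
aws s3api list-object-versions \
  --bucket myapp-config-deployment \
  --prefix docker-compose.prod.yml

# 2. Restore an older version by copying it over the current object
aws s3api copy-object \
  --bucket myapp-config-deployment \
  --key docker-compose.prod.yml \
  --copy-source "myapp-config-deployment/docker-compose.prod.yml?versionId=EXAMPLE_VERSION_ID"
```

The next pipeline run (or a manual SSM command) will then pull the restored file onto the instance.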

🔗 Resources

Demo Repository

Complete implementation available in our demo repository

Official Documentation

📈 Next Steps: Implementing CloudWatch Monitoring

Our next focus will be:

  • Setting up CloudWatch