Deploying a Containerized Application with Amazon EKS, AWS Fargate, and GitOps Practices

At SUDO Consultants, we’ve guided businesses through IT transformations for years, and one recurring need stands out: simplifying application deployments while maintaining scalability and efficiency. Containerization and automation are now essential tools for organizations aiming to stay competitive in today’s fast-paced environment.

This guide reflects our expertise and demonstrates how Amazon EKS, AWS Fargate, and GitHub Actions address common challenges in modern deployments. Let’s dive in and explore how these solutions can transform your deployment workflows.

Why EKS, Fargate, and GitHub Actions?

Amazon EKS, Fargate, and GitHub Actions form a powerful trifecta for modern application deployment:

  • Amazon EKS: Simplifies Kubernetes management by abstracting complex tasks, enabling organizations to deploy, scale, and manage containerized applications effortlessly.
  • AWS Fargate: Provides serverless compute, allowing businesses to focus on application development rather than infrastructure maintenance.
  • GitHub Actions: Facilitates CI/CD automation by integrating seamlessly with development workflows, reducing manual interventions and accelerating delivery pipelines.

This combination empowers businesses to deploy applications rapidly, securely, and at scale, while maintaining cost efficiency.

How This Guide Helps

By leveraging Amazon EKS, AWS Fargate, and GitHub Actions, you’ll learn how to:

  • Deploy containerized applications in a scalable and cost-effective manner.
  • Automate build, test, and deployment processes for faster delivery.
  • Adopt best practices in modern cloud-native deployments.

At SUDO Consultants, we specialize in helping businesses navigate the complexities of AWS services, ensuring that their cloud infrastructure aligns with their goals. This step-by-step guide reflects our expertise in creating seamless, efficient, and scalable cloud solutions for clients across industries.

Step-by-Step Deployment Guide

Figure 1: Architectural Flow Diagram of the App Deployment on Amazon Web Services

1. Prerequisites

  • AWS account with administrative permissions.
  • GitHub account for repository hosting.
  • Installed tools:
    • AWS CLI
    • kubectl
    • eksctl
    • Docker
  • Local development environment (e.g., Visual Studio Code).
  • IAM user with necessary permissions to interact with Amazon EKS, Fargate, and ECR.

Tip: When granting IAM permissions for Amazon EKS, Fargate, and ECR, follow the principle of least privilege so that users have only the permissions their tasks require. This reduces the risk of unauthorized actions, safeguards critical resources, and helps maintain compliance with access-control requirements in regulatory frameworks.
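Before proceeding, it helps to confirm that the required CLI tools are actually installed. A quick check you can paste into a terminal:

```shell
# Report which of the required CLI tools are on the PATH
missing=0
for tool in aws kubectl eksctl docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing=$((missing + 1))
  fi
done
echo "$missing tool(s) missing"
```

Install anything reported as MISSING before moving on to the next step.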

2. Setting Up a Simple Node.js Application

2.1. Create the Node.js Application

  1. Initialize a new Node.js project:
mkdir nodejs-eks-app && cd nodejs-eks-app
npm init -y
  2. Install express:
npm install express
  3. Create an index.js file:
const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => res.send('Hello, AWS Fargate!'));

app.listen(PORT, () => console.log(`Server running on port ${PORT}`));

Figure 3: index.js file

  4. Add a Dockerfile to containerize the app:
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]

Figure 4: Dockerfile
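Before pushing anywhere, you can build and smoke-test the container locally. A sketch, assuming Docker is installed and running (the container name eks-app-test is arbitrary):

```shell
# Build the image and hit the app locally before involving any AWS services
APP_NAME=nodejs-eks-app
if command -v docker >/dev/null 2>&1; then
  docker build -t "$APP_NAME" .
  docker run -d --rm -p 3000:3000 --name eks-app-test "$APP_NAME"
  sleep 2
  curl -s http://localhost:3000/ ; echo   # should print: Hello, AWS Fargate!
  docker stop eks-app-test 2>/dev/null || true
else
  echo "docker not found; install Docker Desktop or Docker Engine first"
fi
```

If the curl response is the expected greeting, the image is ready for ECR.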


3. Building and Pushing the Docker Image to Amazon ECR

3.1. Create an Amazon ECR Repository

  1. In the AWS Management Console, go to ECR → Repositories → Create repository.
  2. Name it nodejs-eks-app.
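Alternatively, the repository can be created from the CLI; a sketch, assuming your AWS CLI is configured with sufficient permissions (adjust the region to yours):

```shell
# Create the ECR repository from the command line instead of the console
REGION=us-east-1          # replace with your region
REPO=nodejs-eks-app
if command -v aws >/dev/null 2>&1; then
  aws ecr create-repository --repository-name "$REPO" --region "$REGION" \
    || echo "create failed; the repository may already exist"
else
  echo "aws CLI not found; create the repository in the console instead"
fi
```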

3.2. Authenticate Docker with ECR

Use the AWS CLI to authenticate Docker:

aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account_id>.dkr.ecr.<region>.amazonaws.com

3.3. Build and Push the Docker Image

  1. Build the Docker image:
docker build -t nodejs-eks-app .
  2. Tag the image for ECR:
docker tag nodejs-eks-app:latest <account_id>.dkr.ecr.<region>.amazonaws.com/nodejs-eks-app:latest
  3. Push the image:
docker push <account_id>.dkr.ecr.<region>.amazonaws.com/nodejs-eks-app:latest
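To keep the tag and push commands consistent, the registry address can be set once in a variable (the account ID below is the example used later in this guide; substitute your own):

```shell
# Build the full ECR image URI once and reuse it for tagging and pushing
ACCOUNT_ID=891377191404   # replace with your AWS account ID
REGION=us-east-1          # replace with your region
ECR_URI="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/nodejs-eks-app"
echo "$ECR_URI:latest"
# docker tag nodejs-eks-app:latest "$ECR_URI:latest"
# docker push "$ECR_URI:latest"
```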

Figure 5: Docker image on ECR repository


4. Setting Up an Amazon EKS Cluster with Fargate

Using Amazon EKS with Fargate offers the advantage of serverless compute, eliminating the need to manage and scale underlying nodes, which simplifies cluster operations. It enhances security by isolating workloads at the pod level and ensures resource optimization by automatically allocating the required CPU and memory for each pod. This combination reduces operational overhead while providing a scalable and cost-efficient environment for running Kubernetes applications.

4.1. Create an EKS Cluster

Use eksctl to create the cluster:

eksctl create cluster \
  --name nodejs-cluster \
  --region <region> \
  --fargate

Creating the EKS cluster can take some time (often 10–20 minutes), so be patient with it. At this point, you can confirm that the creation process is on the right track by checking the status of its CloudFormation stack in the AWS Console.
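eksctl provisions the cluster through a CloudFormation stack named eksctl-&lt;cluster-name&gt;-cluster, so the status can also be polled from the CLI rather than the console:

```shell
# Poll the status of the CloudFormation stack that eksctl created
CLUSTER=nodejs-cluster
REGION=us-east-1          # replace with your region
STACK="eksctl-$CLUSTER-cluster"
if command -v aws >/dev/null 2>&1; then
  aws cloudformation describe-stacks \
    --stack-name "$STACK" --region "$REGION" \
    --query 'Stacks[0].StackStatus' --output text \
    || echo "could not read stack status (check credentials and region)"
else
  echo "aws CLI not found"
fi
```

A status of CREATE_COMPLETE means the cluster is ready.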

When the cluster creation is successful, eksctl provides a success message like the one shown below: 

4.2. Verify the Cluster

kubectl get nodes

Figure 6: EKS cluster setup in AWS Management Console.

5. Creating Kubernetes Manifests

Kubernetes manifests are declarative configuration files written in YAML or JSON that define the desired state of Kubernetes resources such as pods, deployments, and services. They allow users to describe how applications and infrastructure should behave, enabling Kubernetes to automatically reconcile the cluster to match these definitions. Using version-controlled Kubernetes manifests aligns with GitOps best practices, as it ensures infrastructure configurations are tracked, auditable, and easily rolled back if necessary. This approach promotes collaboration, consistency across environments, and seamless automation in CI/CD pipelines, making deployments more reliable and predictable.

5.1. Write Deployment and Service YAML Files

  1. Create a k8s-deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
      - name: nodejs-container
        image: 891377191404.dkr.ecr.us-east-1.amazonaws.com/nodejs-eks-app:latest
        ports:
        - containerPort: 3000
  2. Create a k8s-service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "HTTP"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "target-type=ip"
spec:
  selector:
    app: nodejs-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer

Figure 7: YAML files in a text editor
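Before committing these files, they can be validated client-side without touching the cluster:

```shell
# Validate the manifests locally; --dry-run=client makes no cluster changes
checked=0
for f in k8s-deployment.yaml k8s-service.yaml; do
  if command -v kubectl >/dev/null 2>&1; then
    kubectl apply --dry-run=client -f "$f" || echo "$f failed validation"
  else
    echo "kubectl not found; skipping $f"
  fi
  checked=$((checked + 1))
done
echo "$checked file(s) checked"
```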


6. Automating Deployment with GitHub Actions

GitOps is becoming essential for modern infrastructure management, using Git as the single source of truth for declarative configurations. It enhances deployment consistency, simplifies rollbacks, and promotes collaboration by aligning development and operations workflows. As automation and scalability become priorities, GitOps reduces errors and accelerates delivery.

6.1. Create a GitHub Repository

Push the application code to a new GitHub repository.

Figure 8: GitHub repository

6.2. Define the GitHub Actions Workflow

  1. Create a .github/workflows/deploy.yml file:
name: CI/CD Pipeline

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout Code
      uses: actions/checkout@v4


    - name: Authenticate AWS
      uses: aws-actions/configure-aws-credentials@v4
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-1


    - name: Configure kubectl
      run: |
        aws eks update-kubeconfig --region us-east-1 --name nodejs-cluster


    - name: Login to Amazon ECR
      run: |
        aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 891377191404.dkr.ecr.us-east-1.amazonaws.com

    - name: Build and Push Docker Image
      run: |
        docker build -t nodejs-eks-app .
        docker tag nodejs-eks-app:latest 891377191404.dkr.ecr.us-east-1.amazonaws.com/nodejs-eks-app:latest
        docker push 891377191404.dkr.ecr.us-east-1.amazonaws.com/nodejs-eks-app:latest

    - name: Deploy to EKS
      run: |
        kubectl apply -f k8s-deployment.yaml
        kubectl apply -f k8s-service.yaml
  2. Add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to GitHub Secrets.


GitHub Secrets are used to securely store sensitive information, such as API keys and credentials, ensuring they are not hardcoded into source code. In our case, we’ll use GitHub Secrets to securely pass AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to our workflows, enabling secure authentication with AWS services during CI/CD automation. This approach minimizes security risks while maintaining seamless integration.
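If you use the GitHub CLI, the two secrets can be added from the terminal rather than the web UI; a sketch, assuming gh is installed, authenticated against the repository, and the credentials are exported in your environment:

```shell
# Store the AWS credentials as repository secrets for the Actions workflow
if command -v gh >/dev/null 2>&1; then
  gh secret set AWS_ACCESS_KEY_ID --body "$AWS_ACCESS_KEY_ID" || true
  gh secret set AWS_SECRET_ACCESS_KEY --body "$AWS_SECRET_ACCESS_KEY" || true
else
  echo "gh CLI not found; add the secrets under Settings > Secrets and variables > Actions"
fi
status=done
```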

Figure 9: AWS Access Keys

6.3. Pushing Code to the Remote Repository

Push the local code to the remote Git repository. 

git branch -M main
git push -u origin main

Once the code is pushed to the main branch, the GitHub Actions workflow defined in .github/workflows/deploy.yml will automatically trigger. This workflow will handle the build, image push to ECR, and deployment to EKS.

Figure 10: Triggered GitHub Actions workflow.

7. Testing and Monitoring the Deployment

  1. Confirm the application is running:
kubectl get pods
kubectl get svc

Figure 11: Output of kubectl checking commands

  2. Access the application using the Load Balancer URL.
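The load balancer hostname can be pulled straight from the Service object; a sketch (the URL will read "pending" until AWS finishes provisioning the load balancer):

```shell
# Read the ELB hostname from the Service and build the application URL
LB=""
if command -v kubectl >/dev/null 2>&1; then
  LB=$(kubectl get svc nodejs-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' 2>/dev/null)
fi
URL="http://${LB:-pending}"
echo "$URL"
# curl "$URL"   # should print: Hello, AWS Fargate!
```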

Figure 12: Application on browser


8. Cleanup

Delete the resources to avoid incurring unnecessary costs:

eksctl delete cluster --name nodejs-cluster
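Deleting the cluster removes the EKS resources, but the ECR repository (and the images stored in it) persists and continues to accrue storage charges; it can be removed as well:

```shell
# Delete the ECR repository and all images in it to stop storage charges
REGION=us-east-1          # replace with your region
REPO=nodejs-eks-app
if command -v aws >/dev/null 2>&1; then
  aws ecr delete-repository --repository-name "$REPO" --region "$REGION" --force \
    || echo "delete failed; the repository may already be gone"
else
  echo "aws CLI not found"
fi
```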

9. Conclusion

Through this guide, we demonstrated the end-to-end deployment of a containerized Node.js application using Amazon EKS, AWS Fargate, and GitHub Actions. By following this process, we achieved a seamless and automated deployment pipeline that highlights the best practices of modern cloud-native application delivery.

This implementation illustrates how AWS services and CI/CD pipelines can reduce operational complexity while enhancing scalability, security, and cost efficiency. This guide is a foundation, and businesses can further optimize these practices for even greater agility and innovation.

Applications Across Industries

The deployment approach outlined in this guide is not limited to a specific industry—it’s a versatile solution that can transform operations across multiple sectors. Here’s how different industries can leverage these practices:

  • E-commerce: Ensure high availability and rapid scaling during seasonal spikes like Black Friday or holiday sales. Automate rollouts of new features without downtime, providing a seamless shopping experience for customers.
  • Healthcare: Deploy HIPAA-compliant applications with enhanced security and scalability, enabling real-time processing of patient data and improving telehealth services.
  • Finance: Build secure, cost-effective platforms for handling real-time transactions, fraud detection, and customer-facing applications with zero downtime.
  • Media and Entertainment: Enable global streaming platforms to deliver consistent, high-performance content delivery while scaling to meet unpredictable viewer demands.
  • Technology Startups: Quickly prototype, test, and deploy applications to remain agile and competitive in the market.

Work With the Experts

At SUDO Consultants, we bring unparalleled expertise in AWS cloud solutions to help businesses optimize their infrastructure and workflows. Whether you’re looking to adopt containerization, streamline deployments, or build highly available systems, our team of AWS-certified professionals ensures your success.

Partnering with SUDO Consultants means working with a team that understands the nuances of cloud-native technologies and can tailor solutions to meet your specific business goals. Contact us today for a consultation and let us help you unlock the full potential of AWS!