Containers

CI/CD with Amazon EKS using AWS App Mesh and Gitlab CI

Using containers brings flexibility, consistency, and portability to a DevOps culture. An essential part of DevOps is creating Continuous Integration and Deployment (CI/CD) pipelines that deliver containerized application code to production systems faster and more reliably. Enabling canary or blue-green deployments in CI/CD pipelines provides more robust testing of new application versions in a production system, along with a safe rollback strategy. A service mesh helps with canary deployments in production: it controls and distributes traffic between different versions of the application with a variable percentage of user traffic, gradually incrementing the share of traffic sent to the newer version. If anything goes wrong while the percentage is being incremented, the deployment is aborted and rolled back to the previous version.

Gitlab, which provides version control, CI/CD pipelines, and other DevOps practices, is used by many AWS customers today in their daily software development cycles. Gitlab makes it easy to visualize and track CI/CD pipelines when a team deploys containerized applications with canary deployment techniques and service meshes. Existing Gitlab users can also modify their CI/CD pipelines to add a service mesh without changing their CI/CD tool.

In this post, you will learn how to deploy an application to Kubernetes in a CI/CD pipeline using AWS App Mesh. We will use Gitlab as the source code repository and build a complete CI/CD pipeline for applications deployed on Amazon Elastic Kubernetes Service (Amazon EKS). You will also learn how to deploy the application with a canary technique in a Gitlab CI/CD pipeline using AWS App Mesh resources.

If you are new to Amazon EKS or AWS App Mesh, check out the Amazon EKS resources and AWS App Mesh resources.


Sample Architecture

The following diagram shows the architecture that we are going to implement. It represents a complete CI/CD pipeline that uses Gitlab CI/CD to automatically coordinate building, testing, and deploying an application on EKS for every commit to the repository. On every commit, Gitlab CI fetches the code, builds it, runs the unit/integration tests, and then uses the AWS App Mesh components to deploy the application to EKS with a canary technique.

One additional component in this architecture is Amazon DynamoDB. Version tracking is a required part of continuous application deployments, and a DynamoDB table will keep the current and previous versions of the application. In every deployment step, Gitlab CI will fetch from and update this DynamoDB versioning table.


Tutorial

Before we begin, please ensure that you have your own Gitlab account and the necessary permissions to configure CI/CD pipelines. We will deploy a sample application to Amazon EKS using a Gitlab CI/CD pipeline, applying a canary technique with AWS App Mesh. When a developer commits Flask code to the Gitlab repo, Gitlab CI will run the tests, build the Docker image, and push the image to an Amazon ECR registry. Then, it will deploy to Amazon EKS by applying Kubernetes manifest files.

Here are the steps that we will be taking:

  1. Install prerequisites for infrastructure components
  2. Create an Amazon EKS cluster
  3. Install Kubernetes dependencies (alb-ingress-controller, appmesh-controller, etc.)
  4. Create a DynamoDB table for CI/CD versioning
  5. Create a repo in Gitlab
  6. Set up your AWS credentials in your Gitlab account
  7. Push a sample repo to Gitlab
  8. Test your canary deployment


Bootstrapping the Infrastructure

In order to deploy containerized applications to the solution described above with a Gitlab CI/CD pipeline, you need to complete the following infrastructure steps:

  1. Creating an EKS Cluster that has access to AWS App Mesh
  2. Installing the AWS App Mesh controller
  3. Installing the ALB Ingress Controller
  4. Creating a DynamoDB table for application versioning

Creating an EKS cluster and installing the controllers

To build out an EKS cluster with AWS App Mesh access, and to install the ALB Ingress Controller and AWS App Mesh controller into the cluster, check out this GitHub page, which explains all the required steps. You can also have a look at this AWS blog for help spinning up EKS clusters and installing the controllers.

You will need AWS CLI, jq, Helm, eksctl, and kubectl to build out the infrastructure.
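
If you only need a rough idea of what those steps look like, here is a minimal sketch, assuming a throwaway cluster name and region (exact flags can vary by eksctl and chart version, and the linked guide covers the IAM/OIDC and ALB Ingress Controller details in full):

# Create a cluster whose nodes have access to AWS App Mesh
eksctl create cluster \
  --name appmesh-blog \
  --region us-west-2 \
  --appmesh-access

# Install the App Mesh controller from the AWS eks-charts Helm repository
helm repo add eks https://aws.github.io/eks-charts
kubectl create namespace appmesh-system
helm upgrade -i appmesh-controller eks/appmesh-controller \
  --namespace appmesh-system \
  --set region=us-west-2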

Creating a DynamoDB table for CI/CD Versioning

Version tracking is a critical step in CI/CD cycles, and in our case we will use a DynamoDB table to keep track of the current and previous versions of our Flask application. When a developer commits code to Gitlab and triggers the Gitlab CI pipeline, the pipeline interacts with this DynamoDB table to check the version of the application.

export TABLE_NAME=versioning
export REPO_NAME=flask-app

aws dynamodb create-table \
    --table-name $TABLE_NAME \
    --attribute-definitions \
        AttributeName=app_name,AttributeType=S \
    --key-schema \
        AttributeName=app_name,KeyType=HASH \
    --provisioned-throughput \
        ReadCapacityUnits=1,WriteCapacityUnits=1

aws ecr create-repository \
    --repository-name $REPO_NAME
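
As a sketch of how a pipeline step can interact with this table (the exact attribute layout is defined by pipeline.sh in the sample repo; the version attribute below is illustrative):

# Fetch the currently deployed version of the application (empty on the first run)
aws dynamodb get-item \
    --table-name $TABLE_NAME \
    --key '{"app_name": {"S": "appmesh-app"}}' \
    --query 'Item.version.S' --output text

# Record the new version after a successful rollout
aws dynamodb put-item \
    --table-name $TABLE_NAME \
    --item '{"app_name": {"S": "appmesh-app"}, "version": {"S": "'$CI_PIPELINE_ID'"}}'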

Once the infrastructure is complete, you can focus on deploying the application with App Mesh to Kubernetes via Gitlab CI/CD.


Gitlab CI/CD pipeline

In order to execute Gitlab CI/CD pipelines, you must have a .gitlab-ci.yml file in your Gitlab repo. .gitlab-ci.yml is a YAML file that lists all the steps and actions the Gitlab pipeline executes.

Before we begin, you can check out this repository for the complete source code.

The repository includes the .gitlab-ci.yml shown below. There is a publish step to build a Docker image and push it to the Amazon ECR registry. The deploy%X steps deploy the application to Kubernetes via AWS App Mesh resources with increasing traffic weights. Finally, the kill_previous step removes the old version of the application.


image: python:3.8.6-alpine3.11

variables:
  DOCKER_DRIVER: overlay2
  APP_NAME: appmesh-app

default:
  before_script:
      - |
          apk --update add curl=7.67.0-r1 jq=1.6-r0 bash=5.0.11-r1              
          pip --no-cache-dir install --upgrade pip awscli==1.18.154 && aws --version && rm -rf /var/cache/apk/*  

          export KUBECTL_VERSION=v1.19.0
          curl -L -o /usr/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl
          chmod +x /usr/bin/kubectl 
          kubectl version --client 

          curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
          mv /tmp/eksctl /usr/local/bin
          eksctl version
stages:
  - test
  - publish
  - deploy
  - kill

unit-test:
  stage: test
  script:
    - sleep 3
    - echo "unit-tests executed"

integration-test:
  stage: test
  script:
    - sleep 3
    - echo "integration-tests executed"


push_to_ecr:
  stage: publish
  image: docker:latest
  services:
    - docker:dind
  before_script:
      - |
          docker --version
          apk add --no-cache curl jq python3 py-pip && pip install awscli==1.18.154
  script:
    - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
    - export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    - echo "ACCOUNT_ID= $ACCOUNT_ID"
    - export REPOSITORY_URL=$ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/flask-app:${CI_PIPELINE_ID} && echo $REPOSITORY_URL
    - cd src/
    - docker build -t $REPOSITORY_URL .
    - docker push $REPOSITORY_URL
    - docker tag $REPOSITORY_URL $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/flask-app:latest
    - docker push $ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/flask-app:latest  

deploy%25:
  stage: deploy
  when: manual
  script:
    - export INIT_WEIGHT=75 && export NEW_WEIGHT=25
    - eksctl utils write-kubeconfig --kubeconfig kubeconfig-$CLUSTER_NAME.yaml --cluster $CLUSTER_NAME --region $AWS_DEFAULT_REGION
    - export KUBECONFIG=${PWD}/kubeconfig-$CLUSTER_NAME.yaml
    - chmod 775 pipeline.sh && ./pipeline.sh deploy

deploy%50:
  stage: deploy
  when: manual
  script:
    - export INIT_WEIGHT=50 && export NEW_WEIGHT=50
    - eksctl utils write-kubeconfig --kubeconfig kubeconfig-$CLUSTER_NAME.yaml --cluster $CLUSTER_NAME --region $AWS_DEFAULT_REGION
    - export KUBECONFIG=${PWD}/kubeconfig-$CLUSTER_NAME.yaml
    - chmod 775 pipeline.sh && ./pipeline.sh deploy


deploy%75:
  stage: deploy
  when: manual
  script:
    - export INIT_WEIGHT=25 && export NEW_WEIGHT=75
    - eksctl utils write-kubeconfig --kubeconfig kubeconfig-$CLUSTER_NAME.yaml --cluster $CLUSTER_NAME --region $AWS_DEFAULT_REGION
    - export KUBECONFIG=${PWD}/kubeconfig-$CLUSTER_NAME.yaml
    - chmod 775 pipeline.sh && ./pipeline.sh deploy


deploy%100:
  stage: deploy
  when: manual
  script:
    - export INIT_WEIGHT=0 && export NEW_WEIGHT=100
    - eksctl utils write-kubeconfig --kubeconfig kubeconfig-$CLUSTER_NAME.yaml --cluster $CLUSTER_NAME --region $AWS_DEFAULT_REGION
    - export KUBECONFIG=${PWD}/kubeconfig-$CLUSTER_NAME.yaml
    - chmod 775 pipeline.sh && ./pipeline.sh deploy


kill_previous:
  stage: kill
  when: manual
  script:
    - eksctl utils write-kubeconfig --kubeconfig kubeconfig-$CLUSTER_NAME.yaml --cluster $CLUSTER_NAME --region $AWS_DEFAULT_REGION
    - export KUBECONFIG=${PWD}/kubeconfig-$CLUSTER_NAME.yaml
    - chmod 775 pipeline.sh && ./pipeline.sh destroy

A few notes on the repository contents:

  • The src folder includes the hello-world Flask application and its Dockerfile.
  • The manifests folder includes the Kubernetes deployment and App Mesh manifest files.
  • Finally, the pipeline.sh script manages all deploy and kill_previous steps.
    • When you deploy the application by running the deploy%X stages, pipeline.sh first checks and fetches the latest version of your application from DynamoDB, then creates the Kubernetes namespace, the Kubernetes deployment, and the App Mesh resources: mesh, virtual node, virtual service, virtual router, and virtual gateway. It then adjusts the weights that distribute user traffic between the newer and older versions of the application (see the sketch after this list).
    • If you run the kill_previous stage in Gitlab CI, pipeline.sh will update the virtual router to send 100% of the traffic to the newer version, delete the Kubernetes deployment and virtual node of the previous application version, and then update the DynamoDB table with the current version.
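
To make the traffic shifting concrete, here is a minimal sketch of the kind of weighted route that pipeline.sh manages through the App Mesh controller (the resource and virtual node names are illustrative, and the INIT_WEIGHT/NEW_WEIGHT variables from the deploy%X jobs map onto the two weights):

kubectl apply -f - <<EOF
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  name: flask-app-router        # illustrative name
  namespace: appmesh-app
spec:
  listeners:
    - portMapping:
        port: 8081
        protocol: http
  routes:
    - name: canary-route
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeRef:
                name: flask-app-v1   # previous version (INIT_WEIGHT)
              weight: 50
            - virtualNodeRef:
                name: flask-app-v2   # new version (NEW_WEIGHT)
              weight: 50
EOF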

Now that we've walked through the repository components, you can work with your own pipeline in Gitlab CI/CD. To get started, create a private Gitlab repo named, for example, appmesh-simple-app.

Then, you have to set the following environment variables in the CI/CD project settings (Settings > CI/CD > Variables).

CLUSTER_NAME=<EKS_CLUSTER_NAME>
AWS_ACCESS_KEY_ID=<IAM_ACCESS_KEY_ID> 
AWS_SECRET_ACCESS_KEY=<IAM_SECRET_KEY> 
AWS_DEFAULT_REGION=<AWS_DEFAULT_REGION (e.g., us-west-2)>

Note that we are using the credentials listed above for demo purposes; for production use cases, AWS credentials must be passed to Gitlab securely.


Finally, set the Gitlab repo as your origin remote and push the code to it as shown below:

git remote set-url origin <YOUR_GITLAB_REPOSITORY_URL>
git remote -v
git push origin master

Once you commit and push your changes to the Gitlab repository, the CI/CD pipeline starts automatically and deploys to your EKS cluster.

Below is a sample pipeline for the Gitlab repo:


It automatically publishes the “Dockerized” Flask application to the Amazon ECR registry. Then you can manually run the deploy jobs and use the canary technique to deploy to EKS.

In order to view traffic at the ALB endpoint, run the traffic.sh script located in your cloned repository, providing the ALB endpoint as an argument. It will print the responses returned by the ALB endpoint.
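
If you are curious what such a script boils down to, here is a hypothetical equivalent of traffic.sh, assuming it simply polls the endpoint in a loop:

#!/bin/bash
# Request the ALB endpoint repeatedly so the v1/v2 traffic split
# becomes visible in the responses.
ENDPOINT=$1

for i in $(seq 1 10); do
    curl -s "http://${ENDPOINT}/"
    echo
done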


(⎈ |N/A:default)atalay➜~» ./traffic.sh <ALB END POINT> 
Hello v1
Hello v1
Hello v1
Hello v1
Hello v1

Now you can change the code in src/app/app.py to return “Hello v2,” then commit your code locally and push to the remote again.

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello v2"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8081)

The pipeline will automatically publish the “Dockerized” image. When you run deploy%50, the App Mesh virtual router will redirect half of the traffic to the v1 virtual node and the other half to the v2 virtual node.

(⎈ |N/A:default)atalay➜~» ./traffic.sh <ALB END POINT> 
Hello v1
Hello v2
Hello v1
Hello v2
Hello v1
Hello v2

Then you can run deploy%100 to redirect all traffic to v2, and finally run kill_previous to remove v1 of the application.

Conclusion

In this blog, we used Gitlab continuous integration and AWS App Mesh to publish and deploy a Flask application to an Amazon EKS cluster. Using Gitlab CI and AWS App Mesh, you can adapt your existing containerized applications to do canary deployments, which enable safe deployments and rollbacks in production environments.

Here are a few links that you can check out for more tutorials. You can access example projects for AWS App Mesh on GitHub. Also, refer to the AWS App Mesh documentation for more information about the service, as well as the App Mesh user guide, which provides sections on getting started, best practices, and troubleshooting. Finally, to create better CI/CD pipelines, have a look at the official Gitlab CI documentation.