
Deploying Rails Apps on Amazon's Elastic Kubernetes Service

Hana Mohan


This post will take the containerized Rails application that we created in the last blog post and deploy it to a Kubernetes cluster. Service providers like DigitalOcean, Google Cloud, and Amazon offer managed Kubernetes, and unless you are feeling adventurous, I highly recommend using one of them. At MagicBell, we use Amazon's Elastic Kubernetes Service (EKS) and find it quite performant (once you get past all the IAM hoops).

Setting up the cluster and command line tools

AWS can be challenging to set up - you need to worry about IAM users, roles, security groups, and VPCs to get things working. I won't go into all those details here, as there are plenty of resources on the internet. Assuming that you have created a cluster and set up your AWS CLI properly, you need to update your kubeconfig to add that cluster:

aws eks update-kubeconfig --name {CLUSTER_NAME} --region={AWS_REGION}

You should see an output like Added new context arn:aws:eks:us-east-1:xxxxxxxxxxxx:cluster/your-cluster-name to /Users/unamashana/.kube/config. To use the cluster, switch to its context:

kubectl config use-context arn:aws:eks:us-east-1:xxxxxxxxxxxx:cluster/your-cluster-name

Now you should be able to successfully run this command and see a list of nodes added to your cluster.

kubectl get nodes

Basics of Kubernetes (K8s)

I am assuming that you understand the basics of K8s. For example,

  • Nodes (machines) run pods.
  • Pods run the containerized applications.
  • We use ReplicaSets to manage the availability of pods.
  • Typically, we also need to set up some Services (like a LoadBalancer or a ClusterIP) to make our application accessible to the outside world.

If you aren't well versed in K8s, check out The Kubernetes Book. I also found a lot of good ideas in Kubernetes & Rails and highly recommend it.

It would also help to familiarize yourself with ConfigMaps and Secrets. In the development/test environments, you might be using a gem like dotenv to load environment variables. In production, however, it's best to load your configuration and secrets (passwords, tokens) using ConfigMaps and Secrets.
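
For reference, here is a minimal sketch of the web-config ConfigMap that the deployments later in this post reference. The keys and values are only examples - use whatever your app expects in its environment:

apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  RAILS_ENV: production
  RAILS_LOG_TO_STDOUT: "true"
  DATABASE_URL: postgres://user:password@your-rds-endpoint:5432/my_app_production
  REDIS_URL: redis://your-elasticache-endpoint:6379/0

Saving this as, say, configmap.web.yml and running kubectl apply -f configmap.web.yml creates it in the cluster so the pods can reference it.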

Finally, you might want to use namespaces to host your staging and production environments in the same cluster. There are pros and cons to this approach, and it's good to understand them before you make the decision.
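
If you do go the namespace route, creating one is a single command (staging here is just an example name):

kubectl create namespace staging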

Components of our deployment

When we containerized our Rails app using Docker, we created a web and a worker image. To deploy the application, we'll create a Deployment (backed by a ReplicaSet) for the web image and another for the worker image. We'll set up a load balancer to send HTTPS traffic to the web servers and, finally, find a way to run migrations before releasing a new version. We'll assume that we are using RDS and ElastiCache for running Postgres and Redis; one of the benefits of using AWS is not having to host everything ourselves.

Web Deployment

Let's go ahead and create a Deployment for the web image. This step assumes that you have uploaded the image to the Elastic Container Registry and that your EKS nodes are set up to pull it. Note that you'll either have to manually replace the environment variables in the manifest before running the deployment command or use a tool like envsubst to do it (an example follows the apply command below).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: $AWS_ECR_ACCOUNT_URL/my-app-web:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        envFrom:
        - configMapRef:
            name: web-config

deployment.web.yml

The above file deploys the my-app-web image, ensuring that there are at least two replicas running at all times. The pods are labeled webserver, and the load balancer can use this label to send them traffic. We use a ConfigMap to load the environment (things like DATABASE_URL) and expose port 3000. We will need to load Secrets too, but for the sake of simplicity, let's skip that for now and load our secrets from the ConfigMap as well.
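
For completeness, here is a rough sketch of the Secret route - the web-secrets name and key are assumptions for illustration, not resources we create in this post:

apiVersion: v1
kind: Secret
metadata:
  name: web-secrets
type: Opaque
stringData:
  SECRET_KEY_BASE: replace-me

The container spec would then list a secretRef entry under envFrom, next to the existing configMapRef, and those keys show up as environment variables just like the ConfigMap ones.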

To deploy this,

kubectl apply -f deployment.web.yml
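
As mentioned above, the manifest contains shell-style variables like $AWS_ECR_ACCOUNT_URL. One way to fill them in at deploy time - a sketch, assuming envsubst (from gettext) is installed and the variable is exported in your shell - is to pipe the substituted manifest straight into kubectl:

export AWS_ECR_ACCOUNT_URL=xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com
envsubst < deployment.web.yml | kubectl apply -f -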

If everything goes well, run this command after a few minutes to confirm that your web deployment is up and running:

kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
my-app-web-847cd49dd7-cwmwc      1/1     Running   0          118m
my-app-web-847cd49dd7-nkg95      1/1     Running   0          118m

In the very likely case that something goes wrong, you will see an error. One of the most common ones is ErrImagePull - an error in pulling the image. In such cases, you can get more information with:

kubectl describe pods
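
If the pods start but the Rails process itself crashes or misbehaves, the container logs are usually more telling than the pod events; the pod name below is just the one from the listing above:

kubectl logs my-app-web-847cd49dd7-cwmwc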

If you are using namespaces, you'll have to append -n {STAGE} to every command. For example:

kubectl describe pods -n staging

Deployment for the worker

The deployment for the worker is very similar. However, since the worker runs in the background (silently, I hope), we don't need to expose an HTTP port. The configuration looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-worker-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: $AWS_ECR_ACCOUNT_URL/my-app-worker:latest
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: web-config

deployment.worker.yml

To deploy this,

kubectl apply -f deployment.worker.yml

If you list your pods now, you should also see the worker pods.

Load Balancer

Your web server is serving traffic on port 3000, but unfortunately, you cannot reach it from the outside world. Let's fix that by provisioning a load balancer and having it proxy traffic to our pods. To achieve this, we'll create a Service:

apiVersion: v1
kind: Service
metadata:
  name: my-app-load-balancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:{AWS_ACCOUNT_ID}:certificate/{HTTPS_CERTIFICATE_ID}
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: webserver
  ports:
    - protocol: TCP
      port: 443
      targetPort: 3000
      name: https
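
Assuming you saved this manifest as service.web.yml (the filename is just an example), applying it works like the other resources:

kubectl apply -f service.web.yml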

Once you apply this configuration file to your cluster, it will provision a load balancer and route all HTTPS traffic to port 3000 on the pods matching the webserver label. If you are hosting your domain with Route53, you can create wildcard HTTPS certificates; once you have the certificate ID, you supply it to the load balancer using the Kubernetes annotations for the AWS load balancer, as shown above.

To get the details of the load balancer, run

kubectl describe service my-app-load-balancer

It might take a few minutes for the load balancer to become available. Once it is, you can add a record in Route53 to send traffic to it.

Run migrations with jobs

To run one-off (or recurring) tasks, we can use the K8s Job spec. I must warn you - Jobs aren't as easy to use in practice as the other components: if a job errors out, there is a fair amount of mess to clean up. Nevertheless, let's give it a shot:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  template:
    spec:
      containers:
      - name: db-migration
        image: {AWS_ECR_ACCOUNT_URL}/my-app-base:latest
        command: ["rake",  "db:migrate"]
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: web-config
      restartPolicy: Never
  backoffLimit: 4

migration.yml

The Job spec is quite similar to the other specs but accepts a command attribute that lets you run a custom command. Also, in this configuration file, we are using the base image we created in the last blog post. We could just as easily have used the web server image, but the idea of starting a web server to run migrations felt icky. Our base image only runs a bash shell and leaves it at that.

kubectl apply -f migration.yml

This should migrate your database. However, there are a few things to consider:

  • If this job fails, it will spin up a few more pods and retry (and fail again) until it hits the backoffLimit.
  • You won't be able to re-run the job after fixing the image/config file because the old job name will still be in use. You'll have to delete the old job first - in fact, you'll have to do that even if the job succeeds (see the command below).
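
Cleaning up is a single command; db-migration is the job name from the manifest above:

kubectl delete job db-migration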

Next Steps: Redeployments

Before we fix that, let's review what we have done so far. In theory, you now have a running app. Congratulations!

Unfortunately, you cannot easily redeploy it, or make configuration changes and have them picked up automatically. Also, migrations are a bit icky, and there is no easy way to roll back a bad deployment.

Unlike with Capistrano, you cannot just update the code and restart your application in K8s. Every time you want to deploy, you need to build a new image and push it up. If there are no config changes, you can simply delete the old pods and have new pods spin up with the new image (thanks to imagePullPolicy: Always). To accomplish this:

kubectl delete pods --all

Since we are using a Deployment (and its ReplicaSet) to manage our pods, K8s will recreate the deleted pods automatically. Keep in mind, though, that deleting every pod at once can cause a brief gap in service while the replacements start; the rolling restart shown below replaces pods gradually and gets you zero-downtime deployments.
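
A gentler alternative, available in kubectl 1.15 and later, is a rolling restart, which brings new pods up before terminating the old ones (the names below are the Deployments we created earlier):

kubectl rollout restart deployment my-app-web-deployment
kubectl rollout restart deployment my-app-worker-deployment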

Another trick for redeploying is to change the image directive in the K8s config files to:

        image: {AWS_ECR_ACCOUNT_URL}/my-app-base:{SHA1}

Instead of pulling latest, we pull a tagged image, using the SHA1 of the latest build each time.
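
If you'd rather not edit the manifest for every release, the same effect can be achieved imperatively - a sketch, where webserver is the container name from deployment.web.yml and $SHA1 is whatever your build produces:

kubectl set image deployment/my-app-web-deployment webserver=$AWS_ECR_ACCOUNT_URL/my-app-web:$SHA1

Because this changes the pod template, the Deployment performs a rolling update automatically.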

This works for code, but if our ConfigMap changes, we need to update (or delete and recreate) it and then delete all the pods so they come back up with the new values. This is messy and error-prone. Also, there is no easy way to roll back both the code and config changes if something goes wrong.

To address all of this, we are going to use Helm. Just so we're clear: we use Docker to manage the dependencies for Rails, K8s to deploy the Docker images, and Helm to manage K8s :)

I'll cover Helm, along with using CircleCI to build a CI/CD process, in the next blog post.
