Kubernetes: The Surprisingly Affordable Platform for Personal Projects

Sep 30, 2018 at 3:07PM
Caleb Doxsey

At the beginning of the year I spent several months deep diving on Kubernetes for a project at work. As an all-inclusive, batteries-included technology for infrastructure management, Kubernetes solves many of the problems you're bound to run into at scale. However, popular wisdom suggests that Kubernetes is an overly complex piece of technology, only really suitable for very large clusters of machines, that it carries a large operational burden, and that therefore using it for anything less than dozens of machines is overkill.

I think that's probably wrong. Kubernetes makes sense for small projects and you can have your own Kubernetes cluster today for as little as $5 a month.

The Case for Kubernetes

I'll show you how to set up your own Kubernetes cluster in a bit, but first I'll try to make the case for using Kubernetes for small projects:

Kubernetes is Robust

At first glance Kubernetes seems like overkill. It seems pretty easy to just provision a VM and configure your web app as a service, so why not do that? Going this route will leave you with several decisions to make:

  1. How do you deploy your application? Just rsync it to the server?
  2. What about dependencies? If you use python or ruby you're going to have to install them on the server. Do you intend to just run the commands manually?
  3. How are you going to run the application? Will you simply start the binary in the background and nohup it? That's probably not great, so if you go the service route, do you need to learn systemd?
  4. How will you handle running multiple applications with different domain names or http paths? (you'll probably need to setup haproxy or nginx)
  5. Suppose you update your application. How do you rollout the change? Stop the service, deploy the code, restart the service? How do you avoid downtime?
  6. What if you screw up the deployment? Any way to rollback? (Symlink a folder...? This simple script isn't sounding so simple anymore.)
  7. Does your application use other services like redis? How do you configure those services?

Kubernetes has solutions for all of these problems. There are certainly other ways to solve them, and perhaps even better ways, but it's one less thing you have to think about and it frees you up to focus on your application instead.
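To pick one example from the list above, zero-downtime rollouts and rollbacks (items 5 and 6) come built in with Deployments. A quick sketch, using a hypothetical deployment named my-app:

kubectl set image deployment/my-app my-app=gcr.io/PROJECT_ID/my-app:v2  # rolling update, no downtime
kubectl rollout status deployment/my-app                                # wait for the rollout to finish
kubectl rollout undo deployment/my-app                                  # roll back if something broke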

Kubernetes is Reliable

A single server is bound to eventually go down. It's rare, maybe once a year, but when it happens it can be a real headache to get things back in a working state. This is especially true if you've configured everything manually. Do you remember the commands you ran last time? Do you even know what the server was running? I'm reminded of this famous bash.org quote:

<erno> hm. I've lost a machine.. literally _lost_. it responds to ping, it works completely, I just can't figure out where in my apartment it is.

http://bash.org/?5273

This recently happened to me on this very blog. I just needed to update a link, but I had completely forgotten how to deploy my blog, and suddenly my 10 minute fix turned into a whole weekend.

Kubernetes uses a declarative format, so you always know what was supposed to be running, and the building blocks of your deployment are much clearer. Furthermore the control plane handles node failure gracefully and automatically reschedules pods. For a stateless service like a web app you probably don't need to worry about failure anymore.

Kubernetes is No Harder to Learn than the Alternatives

Kubernetes does not follow the Unix model. It doesn't fit in a tool ecosystem. It doesn't do one thing and do it well. It's an all-encompassing solution for many problems and it replaces many of the techniques and tools developers may be accustomed to using.

Kubernetes has its own vocabulary, its own tools, its own paradigm for how to think about servers that's quite different from a traditional unix setup. When you know those systems, a lot of that difference seems arbitrary and overly complex; perhaps even cruel. I think there are good reasons for that complexity, but the point I'm making here is not that Kubernetes is simple and easy to understand; rather it's that knowledge of Kubernetes is sufficient to build and maintain infrastructure.

It's not the case that everyone has that unix sysadmin background. Out of college I spent 5 years working in the Windows ecosystem. I can tell you my first job at a startup using linux was not an easy transition. I wasn't familiar with the commands, and I especially wasn't used to doing nearly everything from the command line. It took me a while to learn how to use the platform, but because of when I learned it (after I had already been doing software development for a while), I distinctly remember how painful it was.

With Kubernetes you can start from scratch. It's entirely possible to provision services in Kubernetes without ever having to SSH into a server. You don't have to learn systemd; you don't have to know what runlevels are or whether it was groupadd or addgroup; you don't have to format a disk, or learn how to use ps, or, God help you, vim. All this stuff is important and useful and none of it is going away. I have a great deal of respect for sysadmins who can code-golf their way around a unix environment. But wouldn't it be great if developers could productively provision infrastructure without having to know all of this?

Is this:

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Really any harder than this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80

And that's the best case. If you do infrastructure management remotely competently, you're not going to be maintaining servers by hand. You're going to use a tool to do it: ansible, salt, chef, puppet, etc. Sure, there's a lot you need to learn to use Kubernetes effectively, but it's no harder than learning the alternatives.

Kubernetes is Open Source

In an age where serverless has become so popular, Kubernetes is remarkably free of vendor lock-in. There are at least 3 popular, easy-to-use managed Kubernetes providers (Google, Amazon, Microsoft) that aren't likely to be going anywhere anytime soon. And there are plenty of companies that successfully run their own Kubernetes clusters, with more popping up every day. These days starting with Kubernetes from day one is a reasonable choice for most startups.

As an open-source project, it's well-documented, stable and popular, with problems being thoroughly stackoverflowable. There are certainly bugs and technical challenges, but rest assured there are folks out there pushing Kubernetes in ways you'll probably never come close to. Their pain is your gain, and the technology is only going to improve in the next few years.

Kubernetes Scales

One of the challenges with maintaining infrastructure is that the techniques that make sense for small deployments don't often translate to larger deployments. SCP'ing a binary to a server, killing a process and starting it again is certainly doable with a single server, but once you're maintaining several servers, keeping track of them all can be overwhelming; that's why tools like chef or puppet exist.

But picking the wrong tool can back you into a corner down the line. Suddenly that master chef server can't handle the load of 1,000 servers, blue/green deployment doesn't seem to fit your model, and capistrano tasks take hours to complete. Once you reach a certain size you need to scrap what you've been doing and start over. Wouldn't it be great if you could get off the endless infrastructure hamster-wheel and use a technology that will scale with your needs?

Kubernetes is a lot like a SQL database. SQL is the product of years of tough lessons about the storage of data and how to query it efficiently. You'll probably never need a tenth of the features a decent SQL database provides, and you could probably build something more efficient if you rolled your own custom database, but for the vast majority of situations, a SQL database is not only adequate for your needs, it vastly improves your ability to deliver solutions quickly. SQL schemas and indexing are a whole lot easier to use than custom data structures backed by files -- data structures which will almost certainly grow obsolete as your product grows and changes over time. But a SQL database can probably survive your inevitable refactoring churn.

And so can Kubernetes. Your side project will probably never grow to the size where a technology like Kubernetes is necessary to build it, but it has all the tools you need if you do run into some of those problems, and the skills you'll learn could be invaluable for future projects.

Building Your Own Kubernetes Cluster

So I think it makes sense to use Kubernetes for small projects, but only if it's easy to set up and inexpensive. As it turns out, both are true. There are managed Kubernetes providers which can handle the messy details of maintaining the Kubernetes master control plane, and recent price wars in cloud infrastructure mean that these services are surprisingly inexpensive.

For this example we're going to go with Google's Kubernetes Engine (GKE), but you could also take a look at Amazon (EKS) or Microsoft (AKS) if Google is not your cup of tea. To build our Kubernetes cluster we are going to need:

  1. A Google Cloud account, with billing enabled
  2. A domain name, with its DNS managed by Cloudflare
  3. A local machine with docker installed

In addition, to save money, we are not going to use Google's ingress controller. Instead we will run Nginx on each node as a DaemonSet and build a custom operator to sync the worker nodes' external IP addresses with Cloudflare.

Google Setup

First head to console.cloud.google.com and create a project if you don't already have one. You're also going to need to set up a billing account. Then head to the Kubernetes page in the hamburger menu and create a new cluster. You'll want to do the following:

  1. Choose a zone close to you
  2. Set the machine type to the smallest available (f1-micro)
  3. Set the cluster size to 3 nodes
  4. Enable preemptible nodes (under the advanced options) to cut the cost further

With all those options set you can go ahead and create the cluster. Here's the run-down on the cost:

  - The master control plane is provided free of charge by GKE
  - The 3 preemptible f1-micro nodes come to roughly $5/month in total (prices vary a little by region and change over time)

So we can have a 3 node Kubernetes cluster for the same price as a single Digital Ocean machine.
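If you prefer the command line to the console, the same cluster can be created with something like the following (the cluster name and zone are arbitrary):

gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --machine-type f1-micro \
    --num-nodes 3 \
    --preemptible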

In addition to setting up GKE we need to add a couple of firewall rules to allow the outside world to hit HTTP ports on our nodes. From the hamburger menu, go to VPC Network, Firewall Rules and add rules for TCP ports 80 and 443, with an IP range of 0.0.0.0/0.

Firewall Rules
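If you'd rather script those rules, a roughly equivalent gcloud command (the rule name is arbitrary):

gcloud compute firewall-rules create allow-http-https \
    --allow tcp:80,tcp:443 \
    --source-ranges 0.0.0.0/0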

Local Setup

With a cluster up and running we can now configure it. Install the gcloud tool by following the instructions at cloud.google.com/sdk/docs. Once you have that installed you can set it up by running:

gcloud auth login

You're also going to want to have docker installed and then hook it up to GCR so you can push containers:

gcloud auth configure-docker

You can also install and set up kubectl following the instructions here. Basically:

gcloud components install kubectl
gcloud config set project PROJECT_ID
gcloud config set compute/zone COMPUTE_ZONE
gcloud container clusters get-credentials CLUSTER_NAME
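
To confirm that kubectl is talking to your new cluster, list the nodes; all 3 should report a Ready status:

kubectl get nodes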

Incidentally, it's fantastic that this tooling works on Windows, OSX or Linux. As a sometimes-Windows user, this is surprisingly rare.

Building the Web App

You're welcome to use whatever programming language you like for your web app. Containers abstract away the messy details; we just need to build an HTTP application that listens on a port. Personally I prefer building these apps in Go, but for some variety let's try crystal. Create a main.cr file:

# crystal-www-example/main.cr
require "http/server"

Signal::INT.trap do
  exit
end

server = HTTP::Server.new do |context|
  context.response.content_type = "text/plain"
  context.response.print "Hello world from crystal-www-example! The time is #{Time.now}"
end

server.bind_tcp("0.0.0.0", 8080)
puts "Listening on http://0.0.0.0:8080"
server.listen

We will also need a Dockerfile:

# crystal-www-example/Dockerfile
FROM crystallang/crystal:0.26.1

COPY main.cr main.cr

RUN crystal build -o /bin/crystal-www-example main.cr --release

ENTRYPOINT [ "/bin/crystal-www-example" ]

We can build and test our web app by running:

docker build -t gcr.io/PROJECT_ID/crystal-www-example:latest .
docker run -p 8080:8080 gcr.io/PROJECT_ID/crystal-www-example:latest

And then visit localhost:8080 in your browser. With that working we can push our app to GCR by running:

docker push gcr.io/PROJECT_ID/crystal-www-example:latest

Configuring Kubernetes

My own Kubernetes configuration can be found here.

For this example we are going to create several yaml files to represent our various services and then run kubectl apply to configure them in our cluster. Kubernetes configuration is declarative: these yaml files tell Kubernetes the state we'd like to see, and we leave it up to Kubernetes to get us there. Broadly, here's what we're going to do:

  1. Create a Deployment and Service for our crystal-www-example web app
  2. Run nginx on every node, as a DaemonSet configured via a ConfigMap, to proxy HTTP traffic to the app
  3. Run a small custom app, also as a Deployment, to sync the nodes' external IP addresses with Cloudflare DNS

Web App Config

First let's configure our web app: (make sure to replace PROJECT_ID with your project id)

# kubernetes-config/crystal-www-example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crystal-www-example
  labels:
    app: crystal-www-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: crystal-www-example
  template:
    metadata:
      labels:
        app: crystal-www-example
    spec:
      containers:
      - name: crystal-www-example
        image: gcr.io/PROJECT_ID/crystal-www-example:latest
        ports:
        - containerPort: 8080

---

kind: Service
apiVersion: v1
metadata:
  name: crystal-www-example
spec:
  selector:
    app: crystal-www-example
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

This creates a Deployment, which tells Kubernetes to create a pod with a single container running our docker image, and a Service, which we use for service discovery within the cluster. To apply this configuration run (from the kubernetes-config folder):

kubectl apply -f .

We can test that it's running by using:

kubectl get pod
# you should see something like:
# crystal-www-example-698bbb44c5-l9hj9          1/1       Running   0          5m

And we can also proxy the Kubernetes API so that we can access the app:

kubectl proxy

And then visit: http://localhost:8001/api/v1/namespaces/default/services/crystal-www-example/proxy/

NGINX Config

Typically you'd use an ingress controller when working with HTTP services in Kubernetes. Unfortunately Google's HTTP load balancer is pretty expensive, so instead we're going to run our own HTTP proxy and configure it manually. (Which isn't nearly as hard as it sounds.)

We will use a DaemonSet and a ConfigMap for this. A DaemonSet runs a pod on every node in the cluster. A ConfigMap is basically a small file that we can mount in the container; it's where we will store the nginx config.

The yaml looks like this:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - image: nginx:1.15.3-alpine
        name: nginx
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        volumeMounts:
        - name: "config"
          mountPath: "/etc/nginx"
      volumes:
      - name: config
        configMap:
          name: nginx-conf

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    worker_processes 1;
    error_log /dev/stdout info;

    events {
      worker_connections 10;
    }

    http {
      access_log /dev/stdout;

      server {
        listen 80;
        location / {
          proxy_pass http://crystal-www-example.default.svc.cluster.local:8080;
        }
      }
    }

You can see how we mount the config map's nginx.conf inside the nginx container. We also set two additional fields on the spec: hostNetwork: true, so that we can bind the host port and reach nginx from the outside, and dnsPolicy: ClusterFirstWithHostNet, so that we can still reach services inside the cluster. Otherwise it's fairly standard config.
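The hostname in that proxy_pass line follows the pattern every in-cluster service DNS name uses: <service>.<namespace>.svc.cluster.local. If you're curious, you can resolve it yourself from a throwaway pod:

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup crystal-www-example.default.svc.cluster.local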

Apply the changes and you should be able to reach nginx by hitting the public IP of any of your nodes. You can find those by running:

kubectl get node -o yaml
# look for:
# - address: ...
#   type: ExternalIP
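
With one of those external IPs in hand (a made-up address below), a quick check from your local machine should return our hello message:

curl http://203.0.113.10/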

So our web app is now reachable over the internet. All that remains is to give it a nice name.

Hooking up DNS

We need to set up 3 A DNS records, one per node, for our cluster:

Records in Cloudflare's UI

And then add a CNAME entry to point to those A records. (i.e. www.example.com CNAMEs to kubernetes.example.com) We could do this manually, but it's better to automate it, so that the DNS records stay up to date whenever we scale up or replace nodes.
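In zone-file notation, with made-up documentation IPs, the end result looks like this (the 120 second TTL matches what our sync app sets below):

kubernetes.example.com.  120  IN  A      203.0.113.10
kubernetes.example.com.  120  IN  A      203.0.113.11
kubernetes.example.com.  120  IN  A      203.0.113.12
www.example.com.         120  IN  CNAME  kubernetes.example.com.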

I think this also serves as a good example of how you can get Kubernetes to work for you instead of fighting against it. Kubernetes is totally scriptable and has a powerful API so you can fill in gaps with custom components that aren't too hard to write. I built a small Go app for this which can be found here: kubernetes-cloudflare-sync.

I started by building an informer:

// client is a kubernetes.Interface built from the cluster config;
// stop is a chan struct{} that is closed on shutdown
factory := informers.NewSharedInformerFactory(client, time.Minute)
// the lister is used by resync() to fetch the current set of nodes
lister := factory.Core().V1().Nodes().Lister()
informer := factory.Core().V1().Nodes().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
  AddFunc: func(obj interface{}) {
    resync()
  },
  UpdateFunc: func(oldObj, newObj interface{}) {
    resync()
  },
  DeleteFunc: func(obj interface{}) {
    resync()
  },
})
informer.Run(stop) // blocks until the stop channel is closed

This will call my resync function anytime a node is changed. I then sync the IPs using the Cloudflare API library (github.com/cloudflare/cloudflare-go), similar to this:

// collect the external IP of every node in the cluster
var ips []string
for _, node := range nodes {
  for _, addr := range node.Status.Addresses {
    if addr.Type == core_v1.NodeExternalIP {
      ips = append(ips, addr.Address)
    }
  }
}
sort.Strings(ips)
// create an A record per node IP (error handling omitted for brevity)
for _, ip := range ips {
  api.CreateDNSRecord(zoneID, cloudflare.DNSRecord{
    Type:    "A",
    Name:    options.DNSName,
    Content: ip,
    TTL:     120,
    Proxied: false,
  })
}
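
That snippet only creates records; a complete sync also has to clean up records for nodes that no longer exist. A rough sketch of that reconciliation step, using the same library's older non-context API (method names may differ in newer versions):

// list the A records currently in Cloudflare for this name
existing, err := api.DNSRecords(zoneID, cloudflare.DNSRecord{Type: "A", Name: options.DNSName})
if err != nil {
  return err
}
current := map[string]bool{}
for _, ip := range ips {
  current[ip] = true
}
// delete any record whose IP no longer belongs to a node
for _, r := range existing {
  if !current[r.Content] {
    api.DeleteDNSRecord(zoneID, r.ID)
  }
}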

Then like our web app, we run this app as a Deployment in Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-cloudflare-sync
  labels:
    app: kubernetes-cloudflare-sync
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-cloudflare-sync
  template:
    metadata:
      labels:
        app: kubernetes-cloudflare-sync
    spec:
      serviceAccountName: kubernetes-cloudflare-sync
      containers:
      - name: kubernetes-cloudflare-sync
        image: gcr.io/PROJECT_ID/kubernetes-cloudflare-sync
        args:
        - --dns-name=kubernetes.example.com
        env:
        - name: CF_API_KEY
          valueFrom:
            secretKeyRef:
              name: cloudflare
              key: api-key
        - name: CF_API_EMAIL
          valueFrom:
            secretKeyRef:
              name: cloudflare
              key: email

You will need to create a Kubernetes secret containing the Cloudflare API key and email address:

kubectl create secret generic cloudflare --from-literal=email='EMAIL' --from-literal=api-key='API_KEY'

And you also need to create the service account (which allows our Deployment to access the Kubernetes API to retrieve nodes). First, specifically on GKE, grant yourself the permissions needed to create RBAC roles:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user YOUR_EMAIL_ADDRESS_HERE

And then apply:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-cloudflare-sync
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubernetes-cloudflare-sync
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-cloudflare-sync-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-cloudflare-sync
subjects:
- kind: ServiceAccount
  name: kubernetes-cloudflare-sync
  namespace: default

RBAC is a bit tedious, but hopefully that makes sense. With the config in place and our application running, Cloudflare will be updated anytime one of our nodes changes.
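You can sanity-check that the binding works with kubectl's built-in authorization checker; it should answer yes:

kubectl auth can-i list nodes \
    --as=system:serviceaccount:default:kubernetes-cloudflare-sync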

A similar application for GCP Cloud DNS is available from Jasper Kuperus: github.com/jasperkuperus/kubernetes-gcp-dns-sync.

Conclusion

Kubernetes is poised to become the dominant way of managing large deployments. Although there are significant technical challenges to running Kubernetes at scale, and much of the technology is still in flux, Kubernetes adoption has reached a critical mass and we're likely to see rapid improvements in the next few years.

It's my contention that Kubernetes also makes sense for small deployments and is both easy-to-use and inexpensive today. If you've never tried it, now is as good a time as any to give it a go.