
Vinnie's Single Node PaaS


So my Docker image and container collection has grown over the past years, and I see no sign of it stopping. From adding random services for integrations, to multi-container services for development, staging, and deployment, to throw-away containers, to scheduled task automation... it's time that I consider setting up my own Platform as a Service (PaaS).

Platform As A Service

My primary goal is to get my services/containers into a more managed and stable state. With the right setup, I believe I'm nowhere close to maximizing my ability to grow my container usage. My requirements differ from what most of the documentation covers:

  • I have a single (x86) hardware system with normal residential resources (48-64GB Mem, a few TB of disk).
  • I use a single TLS termination gateway (nginx) because I don't have the time or interest to run FreeIPA or another Identity Management System. Managing certificates in some overly complex system buys me nothing at this point.

Base Operating System

I'm setting up my PaaS as a non-user system that I should rarely log in to for any reason. All access should be via kubectl or web front-ends. For this reason, I'm going with the Alpine Linux distribution because of its light usage of memory and disk. My initial VM setup is 4GB memory and 32GB disk (using LVM). I've downloaded alpine-virt-3.17 because it is specialized for installation as a virtual machine guest.

From console:

  • Boot alpine-virt-3.17.0-x86_64 in VM (4GB-mem,64GB-disk)
    • Login with the password-less root account.
    • As root, run setup-alpine (Ideally w/ OpenSSH)
    • REBOOT
  • Install curl, iproute2, sudo, openrc, bash
    • apk -U add curl iproute2 sudo openrc bash
    • sed -i '/^wheel:/ s/$/,user/' /etc/group
  • Install any VPN packages. (I use Tailscale to VPN into my home network.)
    • apk add -U tailscale (may need to uncomment community repo in /etc/apk/repositories)
    • rc-update add tailscale default
    • /etc/init.d/tailscale start
    • tailscale up
    • Login via another device's web browser.

Hostname Setup

For my single node PaaS, I intend to have all services behind a single IP address. I have therefore set up a simple wildcard (*) rule with my DNS provider that forwards all subdomains to my Kubernetes node's (VPN) IP.
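As a concrete sketch, assuming a hypothetical domain paas.example.com and a Tailscale address of 100.64.0.10 (both placeholders), the wildcard record in BIND zone-file form would look something like:

```
*.paas.example.com.  300  IN  A  100.64.0.10
```

Any subdomain (myapp.paas.example.com, grafana.paas.example.com, and so on) then resolves to the same node, and the ingress controller can route by hostname or path from there.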

If you don't want to publish your address to a public DNS server, you can always set the hostnames in the /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) file of the workstation you are working from.
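Since hosts files have no wildcard support, each subdomain needs its own line. A minimal sketch of that, where the node IP and hostnames are made-up placeholders (on a real workstation you would edit /etc/hosts itself, with admin rights):

```shell
#!/bin/sh
# Append one hosts entry per subdomain; hosts files cannot express "*".
NODE_IP="100.64.0.10"          # placeholder for the node's VPN address
HOSTS_FILE="$(mktemp)"         # stand-in for /etc/hosts in this sketch

for sub in myapp grafana registry; do
    echo "$NODE_IP $sub.paas.home" >> "$HOSTS_FILE"
done

cat "$HOSTS_FILE"
```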

Rancher and K3S

Even though Rancher's Docker container didn't work for me, I've still kept up hope that one day I would run Rancher, because I did see the value in it if I could get Ingress to work as intended. Therefore, I decided to use K3S as my Kubernetes engine. Supposedly it's K8S API certified by some group of people and therefore should be compatible with cloud services or other certified engines if I ever decide to migrate. Great!

One of the beautiful things about K3S is that it is a single binary. If you've followed along with my blog here, you'll know that I have an unhealthy obsession with statically built binaries. This also makes installation quite simple for tinkering. You can literally download the binary from GitHub and run k3s server to have a Kubernetes cluster/node/API running on your system.

For a clean install, the K3S documentation recommends that you go the curl/sh route with something like (as root):

curl -sfL https://get.k3s.io | sh -

Once that downloads the k3s static binary from GitHub and initializes the environment, you are technically done. You have a single-node PaaS Kubernetes setup ready to go.

The initial install should include a Traefik IngressClass, so you can set up a Deployment, Service, and Ingress to get fully routable access to your service. So now, on to making things happen!

Single Node PaaS

As previously mentioned, kubectl api-resources and kubectl explain are your friends when attempting to discover or understand the various YAML options. Below is an example YAML configuration that includes the Deployment, Service, and Ingress resources. The intention is to forward requests for the /myapp path to the test nginx container via the myapp-svc Service and the myapp Pod.

The myapp Yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
  labels:
    app: myapp-svc
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
---
# Note: This is a Middleware required to do URL rewriting with Traefik
# Note: To learn more, see the Traefik Ingress Documentation online.
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: strip-prefix
  # No namespace defined, so it lands in "default"
spec:
  stripPrefixRegex:
    regex:
    - ^/[^/]+
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-traefik
  namespace: default
  annotations:
    # A second annotation (with the value 'false') appeared here in the original.
    traefik.ingress.kubernetes.io/router.middlewares: default-strip-prefix@kubernetescrd
spec:
  ingressClassName: traefik
  rules:
  - host: ""   # empty host matches any hostname
    http:
      paths:
      - pathType: Prefix
        path: /myapp
        backend:
          service:
            name: myapp-svc
            port:
              number: 80

Copy all of this yaml into a myapp.yaml and then apply it with kubectl:

kubectl apply -f myapp.yaml

Presuming everything went according to plan and you've allowed port 80 through your local firewalls, you should now be able to open http://<your-node-address>/myapp in a web browser from your workstation and see the "Welcome to nginx!" web page.

From here, you should now have a simple baseline to work from where you can start to replace or add aspects to the system like ConfigMaps, Secrets, and PersistentVolumeClaims. If you are feeling even more adventurous, you can start to setup your first StatefulSet (in contrast to a Deployment) to experiment with its behaviors. Hint: Watch how it names the pods compared to Deployment replicas.
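As one example of that next step, a ConfigMap could replace the default nginx index page. This is only a sketch; the myapp-html name and the HTML content are my own inventions, not part of the manifest above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-html        # hypothetical name
  namespace: default
data:
  index.html: |
    <h1>Hello from my single node PaaS</h1>
```

Wiring it into the Deployment means adding a configMap-type entry under volumes in the pod spec and a matching volumeMounts entry at /usr/share/nginx/html on the nginx container; a re-apply then rolls the pod and serves the new page.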

If you want to tinker with the actual Yaml that is loaded into Kubernetes, there are several ways to change it:

  • You can always re-run the kubectl apply command you ran before; as long as the resources have the same names, Kubernetes will detect the changes between the new file and what was previously loaded.

  • You can use kubectl edit <resource-type> <resource-name> to modify the resource on the fly, in the terminal, with whatever editor you have set in EDITOR. (I typically use vim.) Once you save and quit the editor, Kubernetes detects and applies the changes to the system. Example:

kubectl edit deployment myapp-deployment


Ok, so we can now deploy a service via YAML; what else can we do with kubectl? There is a long list of things you can do with kubectl in the K8S kubectl Cheatsheet. Here are some that I've specifically found useful:

Show all pods, deployments, services, and so forth in all namespaces:

kubectl get all -A

Show all ingress routing rules for all namespaces:

kubectl get ingress -A

Create a deployment without a Yaml definition:

kubectl create deploy <deploy name> --image <container image>
kubectl create deploy nginx --image nginx

Create a service without a Yaml definition:

kubectl expose deploy <deploy name> --port <port>
kubectl expose deploy nginx --port 80
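For reference, the Service that kubectl expose deploy nginx --port 80 generates should be roughly equivalent to this manifest (assuming the deployment was created with kubectl create deploy, which labels its pods app: nginx):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: ClusterIP          # the default when no --type is given
  selector:
    app: nginx             # matches the label kubectl create deploy sets
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```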

Create a local port-forward to access a service or pod via localhost or another bind address:

kubectl port-forward TYPE/NAME \
[--address <[localhost][,ipv4]>] \
[<host-port-N>:<inner-port-N> ...]
kubectl port-forward --address localhost pod/nginx \
8080:80 8443:443

Construct a Pod/Container without a Yaml definition:

kubectl run NAME [options][--env=] [--port=] \
--image=<image> -- [COMMAND] [args...] [options]
kubectl run mysql-client -it --rm --restart=Never \
--image=mysql -- mysql -h mysql -ppassword

Create a literal secret:

kubectl create secret generic db-user-pass \
--from-literal=username=admin \
--from-literal=password='<your-password>'

Create a secret from file paths:

kubectl create secret generic db-user-pass \
--from-file=username=./username.txt \
--from-file=password=./password.txt

View secret values in terminal:

kubectl get secret db-user-pass \
-o jsonpath='{.data.password}' | base64 --decode
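That decode step works because Secret values are merely base64-encoded, not encrypted. A quick round trip in plain shell makes the point:

```shell
#!/bin/sh
# Secret data is base64-encoded text, trivially reversible by anyone with read access.
encoded=$(printf '%s' 'admin' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$encoded"   # YWRtaW4=
echo "$decoded"   # admin
```

In other words, kubectl get secret output is obfuscated, not protected; RBAC on the Secret resource is what actually guards the values.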