
A simple Kubernetes load balancer

Configures nginx to forward connections to your node IPs. Services should be declared as NodePort, which means they open a port on every node. When a request lands on any node, it is forwarded to the correct pod via the network mesh Kubernetes is using. In theory, there is a one-hop penalty.

But let's be honest: you're running a single LB, probably a GCE free-tier N1 VM. That extra hop doesn't matter.
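
For reference, a Service exposed this way is just an ordinary Service of type NodePort. A minimal sketch (the name and ports below are purely illustrative, not anything skubelb requires):

apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  type: NodePort          # opens the same port on every node
  selector:
    app: my-app
  ports:
  - port: 80              # port inside the cluster
    targetPort: 8080      # container port
    nodePort: 30080       # port opened on each node; nginx forwards here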

Config

Configure nginx to do what you want and test it, using any node IP as the upstream for your testing. The directory holding this configuration will become the --template_dir argument to the LB.
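
As a rough illustration, the relevant part of that config might be no more than a proxy_pass pointed at a node IP and NodePort. Everything below is a placeholder, and the behaviour described in the comments is inferred from the flags further down, not documented:

# Hypothetical snippet from the nginx config that becomes the template
server {
    listen 80;

    location / {
        # some_node_ip:30080 stands for the node IP and NodePort you tested with.
        # Presumably skubelb searches the template for the --needle value and
        # substitutes the registered node IPs when it regenerates the config.
        proxy_pass http://some_node_ip:30080;
    }
}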

Move that directory somewhere new, e.g. /etc/nginx-template/. Then create a symlink at the old location pointing to the new directory (e.g., ln -s /etc/nginx-template /etc/nginx).

Make a workspace directory for this tool; it will write configs to this folder before updating the symlink you created above. It needs to be persistent so the service starts OK after a server reboot (e.g., mkdir /var/skubelb/).

Make sure the user running the tool has read access to the template folder, and read-write access to the workspace folder and the config symlink.
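
Concretely, and assuming you run the tool as a dedicated skubelb user as in the systemd unit below, the setup might look like:

# Move the tested config aside and point the old path at it.
mv /etc/nginx /etc/nginx-template
ln -s /etc/nginx-template /etc/nginx

# Persistent workspace for generated configs, owned by the (assumed) skubelb user.
mkdir /var/skubelb
chown skubelb: /var/skubelb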

Run the server with a command like:

skubelb --needle some_node_ip \
    --workspace_dir /var/skubelb \
    --config_symlink /etc/nginx \
    --template_dir /etc/nginx-template \
    --listen 0.0.0.0:8080

Replace some_node_ip with the node IP you used during the initial setup.

Next, configure the Kubernetes nodes to POST to http://loadbalancer:8080/register when they start, and to DELETE http://loadbalancer:8080/register when they shut down.
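
You can exercise the endpoint by hand before wiring up the cluster. The requests carry no body, so skubelb presumably identifies the node by the source address of the call:

# Register the machine this runs on with the LB...
curl -X POST http://loadbalancer:8080/register

# ...and remove it again.
curl -X DELETE http://loadbalancer:8080/register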

Running as a system service

Add the systemd config to /etc/systemd/system/skubelb.service:

[Unit]
Description=Simple Kubernetes Load Balancer
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=skubelb
ExecStart=/usr/local/bin/skubelb --needle some_node_ip \
    --workspace_dir /var/skubelb \
    --config_symlink /etc/nginx \
    --template_dir /etc/nginx-template \
    --listen 0.0.0.0:8080 \
    --reload-cmd '/usr/bin/sudo systemctl reload nginx'

[Install]
WantedBy=multi-user.target
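
Then reload systemd and start the service. Note that with User=skubelb, the sudo-based --reload-cmd above also needs a sudoers rule allowing that user to run systemctl reload nginx.

systemctl daemon-reload
systemctl enable --now skubelb
systemctl status skubelb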

Sample Kubernetes configuration

Deploy this DaemonSet to your cluster, replacing lb_address with the address your load balancer listens on.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: skubelb
  namespace: skubelb
  labels:
    k8s-app: skubelb
spec:
  selector:
    matchLabels:
      name: skubelb
  template:
    metadata:
      labels:
        name: skubelb
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: skubelb
        image: alpine/curl:latest
        command: ['sh', '-c', 'echo "Wait for heat death of universe" && sleep 999999d']
        lifecycle:
          postStart:
            exec:
              command: ['curl', '-X', 'POST', 'http://lb_address:8080/register']
          preStop:
            exec:
              command: ['curl', '-X', 'DELETE', 'http://lb_address:8080/register']
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 10m
            memory: 100Mi
      terminationGracePeriodSeconds: 30
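
The manifest targets a skubelb namespace, so create that first and then apply the file (the filename here is just an example):

kubectl create namespace skubelb
kubectl apply -f skubelb-daemonset.yaml
kubectl -n skubelb get pods -o wide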

NOTE: you will probably need to add a firewall rule to allow this request through. It is very important that the rule has a source filter: it should only allow traffic from the Kubernetes cluster. Nginx will forward traffic to any host that registers, so an open registration endpoint could easily become a MitM vulnerability.
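
If the LB is the GCE VM alluded to above, the rule might look like the following sketch; the rule name, network, and source range of your cluster nodes are all placeholders.

gcloud compute firewall-rules create allow-skubelb-register \
    --network default \
    --allow tcp:8080 \
    --source-ranges 10.128.0.0/20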