Category Archives: Kubernetes

Pod disruption budgets in Kubernetes

The PodDisruptionBudget kind (or PDB) is used to limit the impact of voluntary disruptions on the availability of your pods.

To give a little more detail, this is a policy that, for example, limits how many pods can be disrupted at once. This ensures a minimum number of pods remain available during operations such as node upgrades, autoscaling or voluntary evictions. In other words, it's a way to ensure serving capacity remains at a given level during upgrades and the like.

Here’s an example yaml file for this

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: echo-pdb
  namespace: dev
spec:
  minAvailable: 1  # at least one pod must remain available (alternatively use maxUnavailable to cap how many may be unavailable)
  selector:
    matchLabels:
      app: echo

In this example we use “minAvailable”; you could instead use “maxUnavailable”, but not both.

Kubernetes Jobs (one off tasks)

In the last post I created a simple application to be used within a schedule, i.e. a CronJob in Kubernetes.

We can also create one off tasks (or a Job) which might be used for migrations or some batch processing. We’re going to use everything from the previous post to build, containerize and push our image to a container registry. The only change is to use the supplied job.yaml file, listed below

apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-job
  namespace: dev
spec:
  template:
    spec:
      containers:
      - name: one-time-job
        image: putridparrotreg/putridparrot/crj:1.0.0
      restartPolicy: Never

Running the following “kubectl get jobs -n dev” results in something like this

NAME           STATUS     COMPLETIONS   DURATION   AGE
one-time-job   Complete   1/1           5s         41s

and running the same command again a little later we get something like

NAME           STATUS     COMPLETIONS   DURATION   AGE
one-time-job   Complete   1/1           5s         83s

and if we check the pods with “kubectl get pods -n dev” we’ll see something like this

NAME                 READY   STATUS      RESTARTS   AGE
one-time-job-h5dvf   0/1     Completed   0          3m23s

and of course we can see the logs of this run via “kubectl logs one-time-job-h5dvf -n dev”, which gives us our application output, i.e. the date/time it was run

Current date and time: 2025-08-17 15:39:53.114962479 +00:00

You’ll note that the pod remained in the cluster; this allowed us to view the logs etc., and it’s down to the developer/devops to delete the job and pod unless…

We can actually set up automated deletion of the job (and its pods) using the “ttlSecondsAfterFinished” option in the yaml file, i.e.

apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-job
  namespace: dev
spec:
  ttlSecondsAfterFinished: 300  # Deletes Job and its Pods 5 minutes after completion
  template:
    spec:
      containers:
      - name: one-time-job
        image: putridparrotreg/putridparrot/crj:1.0.0
      restartPolicy: Never

We also have the option of “activeDeadlineSeconds”. This does not delete or clean anything up, but it can be used in the “spec:” section like “ttlSecondsAfterFinished” to denote that the job will be killed off if not finished within the given time. So for example

apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-job
  namespace: dev
spec:
  ttlSecondsAfterFinished: 300  # Deletes Job and its Pods 5 minutes after completion
  activeDeadlineSeconds: 600 # Job will be killed even if not finished, in 10 minutes
  template:
    spec:
      containers:
      - name: one-time-job
        image: putridparrotreg/putridparrot/crj:1.0.0
      restartPolicy: Never
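To make the timing concrete, here's a small sketch (Python, purely illustrative, not Kubernetes code) of how the two settings interact: activeDeadlineSeconds is measured from the Job's start, while ttlSecondsAfterFinished counts from its completion (or failure):

```python
from datetime import datetime, timedelta

def job_timeline(start: datetime, run_seconds: int,
                 active_deadline: int, ttl_after_finished: int):
    # the job is killed at the deadline if it hasn't finished by then
    finished = start + timedelta(seconds=min(run_seconds, active_deadline))
    # the TTL controller then deletes the Job and its pods after the TTL expires
    cleaned_up = finished + timedelta(seconds=ttl_after_finished)
    return finished, cleaned_up

start = datetime(2025, 8, 17, 15, 39)
finished, cleaned = job_timeline(start, run_seconds=5,
                                 active_deadline=600, ttl_after_finished=300)
print(finished)  # 2025-08-17 15:39:05 (completed well inside the deadline)
print(cleaned)   # 2025-08-17 15:44:05 (job and pod deleted 5 minutes later)
```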

Kubernetes cronjobs

You know the scenario: you want to run jobs either at certain points in the day or throughout the day every N timespans (i.e. every 5 mins).

Kubernetes has you covered, there’s a specific “kind” of job for this, as you guessed from the title, the CronJob.

An example app.

Let’s assume you created yourself a job – I’m going to create a simple job that just outputs the date/time at the scheduled time. I’ve written this in Rust but to be honest it’s simple enough that this could be any language.

The application is just a standard console application named crj (for cronjob or cron rust job, I really didn’t think about it :)). Here’s the Cargo.toml

[package]
name = "crj"
version = "0.1.0"
edition = "2024"

[dependencies]
chrono = "0.4"

Here’s the code

use chrono::Local;

fn main() {
    let now = Local::now();
    println!("Current date and time: {}", now);
}

See I told you it was simple.

Docker

For completeness, here’s the Dockerfile and the steps to get things built, tagged and pushed

FROM rust:1.89.0-slim AS builder

WORKDIR /app
COPY . .

RUN cargo build --release

FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*

COPY --from=builder /app/target/release/crj /usr/local/bin/crj

RUN chmod +x /usr/local/bin/crj

ENTRYPOINT ["/usr/local/bin/crj"]

Next up we need to build the image using the following (remember to use your own image name and the correct name for your container registry)

docker build -t putridparrot/crj:1.0.0 .

then tag it using

docker tag putridparrot/crj:1.0.0 putridparrotreg/putridparrot/crj:1.0.0

Finally we’ll push it to our container registry using

docker push putridparrotreg/putridparrot/crj:1.0.0

Kubernetes CronJob

All pretty standard stuff and to be honest the next bit is simple enough. We need to create a Kubernetes yaml file (or helm charts). Here’s my cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scheduled-job
  namespace: dev
spec:
  schedule: "*/5 * * * *" # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: scheduled-job
              image:  putridparrotreg/putridparrot/crj:1.0.0
          restartPolicy: Never

My cronjob has the name scheduled-job (I know, not very imaginative). We apply this file to Kubernetes as usual i.e.

kubectl apply -f .\cronjob.yaml

Did it work?

We’ll of course want to take a look at what happened after this CronJob was set up in Kubernetes. We can simply use the following (or replace --all-namespaces with -n and a namespace, such as dev in my case, to narrow things down).

kubectl get cronjobs --all-namespaces -w

you’ll see something like this

NAMESPACE   NAME            SCHEDULE      TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
dev         scheduled-job   */5 * * * *   <none>     False     0        <none>          9s
dev         scheduled-job   */5 * * * *   <none>     False     1        0s              16s
dev         scheduled-job   */5 * * * *   <none>     False     0        13s             29s
dev         scheduled-job   */5 * * * *   <none>     False     1        0s              5m16s

In my case the job starts (ACTIVE) and then completes and shuts down. Then 5 minutes later it starts again as expected with this cron schedule.
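The minute field can be sketched with a few lines of Python (not the real cron implementation, just enough to see why the job fires on the clock at 00, 05, 10 and so on):

```python
def matches_minute(field: str, minute: int) -> bool:
    """Match the minute field of a cron expression against a minute value."""
    if field == "*":
        return True
    if field.startswith("*/"):          # step values, e.g. */5
        return minute % int(field[2:]) == 0
    return minute in {int(v) for v in field.split(",")}

fire = [m for m in range(60) if matches_minute("*/5", m)]
print(fire[:4])  # [0, 5, 10, 15]
```

Note this means "*/5" fires at minutes divisible by 5, not 5 minutes after whenever you applied the CronJob.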

On the pods side you can run

kubectl get pods -n dev -w

Now what you’ll see is something like this

NAME                           READY   STATUS              RESTARTS   AGE
scheduled-job-29257380-5w4rg   0/1     Completed           0          51s
scheduled-job-29257385-qgml2   0/1     Pending             0          0s
scheduled-job-29257385-qgml2   0/1     Pending             0          0s
scheduled-job-29257385-qgml2   0/1     ContainerCreating   0          0s
scheduled-job-29257385-qgml2   1/1     Running             0          2s
scheduled-job-29257385-qgml2   0/1     Completed           0          3s
scheduled-job-29257385-qgml2   0/1     Completed           0          5s
scheduled-job-29257385-qgml2   0/1     Completed           0          5s
scheduled-job-29257390-2x98r   0/1     Pending             0          0s
scheduled-job-29257390-2x98r   0/1     Pending             0          0s
scheduled-job-29257390-2x98r   0/1     ContainerCreating   0          0s
scheduled-job-29257390-2x98r   1/1     Running             0          2s

Notice that the pod is created and goes into a “Pending” state, then “ContainerCreating”, before “Running” and finally “Completed”. Each run of the cronjob creates a pod with a new name, so if you’re trying to view the logs, i.e. kubectl logs scheduled-job-29257380-5w4rg -n dev, you’ll get something like the below, but you cannot -f (follow) the logs across runs because the next time the job runs it creates a new pod.

Current date and time: 2025-08-17 15:00:09.294317303 +00:00
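Incidentally, the number in the pod name isn’t random: Kubernetes names each Job created by a CronJob as the cronjob name plus the scheduled time in minutes since the Unix epoch (the trailing five characters are the pod’s random suffix). So we can decode when a run was scheduled:

```python
from datetime import datetime, timezone

pod_name = "scheduled-job-29257380-5w4rg"
# split off the scheduled-time segment and the random pod suffix
job_name, minutes, _suffix = pod_name.rsplit("-", 2)

scheduled = datetime.fromtimestamp(int(minutes) * 60, tz=timezone.utc)
print(job_name)   # scheduled-job
print(scheduled)  # 2025-08-17 15:00:00+00:00, matching the log output above
```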

Configuring your DNS through to your Azure Kubernetes Cluster

Note: I’m going to have to list the steps I think I took to buy a domain name on the Azure Portal, as I didn’t note all the steps down at the time – so please double check things when creating your own.

You can create your domain wherever you like; I happened to have decided to create mine via Azure.

  • Go to the Azure portal
  • Search for DNS Zones and click the Create button
  • Supply your subscription and resource group
  • Select your domain name
  • Click Review + Create, then Create, to create the DNS Zone

To set up the DNS zone (I cannot recall if this was part of the above or a separate step), run

az network dns zone create \
--resource-group {RESOURCE_GROUP} \
--name {DOMAIN_NAME}

I’m going to assume you have Kubernetes installed.

We need a way to get from the outside world into our Kubernetes cluster, so we’ll create an ingress controller using

helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace --namespace ingress-nginx \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.service.externalTrafficPolicy=Local

Next we need to update our DNS record to use the EXTERNAL_IP of the ingress controller we’ve just created, so

  • Run the following to get the EXTERNAL_IP
    kubectl get svc ingress-nginx-controller -n ingress-nginx
    
  • You can go into the DNS record and change the A record (@ Type and any other subdomains you’ve added) to use the EXTERNAL_IP address or use
    az network dns record-set a add-record --resource-group {RESOURCE_GROUP} \
    --zone-name {DOMAIN_NAME} --record-set-name "@" --ipv4-address {EXTERNAL_IP}
    

At this point you’ll obviously need to set up your service with its own ingress, using your domain in the “host” value of the ingress.

A simple web API in various languages and deployable to Kubernetes (Node/Typescript)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Node implementation.

Add the file app.ts to the /src folder with the following content

import express, { Request, Response } from 'express';
import bodyParser from 'body-parser';

const app = express();
const PORT = process.env.PORT || 8080;

app.use(bodyParser.json());


app.get('/echo', (req: Request, res: Response) => {
  const queryParams = req.query;
  res.type('text/plain');
  res.send(`Node Echo: ${queryParams.text}`);
});

app.get('/livez', (_req: Request, res: Response) => {
  res.sendStatus(200);
});

app.get('/readyz', async (_req: Request, res: Response) => {
  try {
    res.sendStatus(200);
  } catch (err) {
    res.status(503).send('Service not ready');
  }
});

app.listen(PORT, () => {
  console.log(`Echo service is live at http://localhost:${PORT}`);
});

Dockerfile

Next up we need to create our Dockerfile

FROM node:24-alpine

WORKDIR /app

COPY package.json package-lock.json* tsconfig.json ./

RUN npm install

COPY src ./src

#RUN npx tsc

EXPOSE 8080

CMD ["npx", "ts-node", "src/app.ts"]

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.
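As an aside, the probe timings in the deployment can be pictured with a tiny sketch (a simplification of the kubelet’s behaviour): the first check fires after initialDelaySeconds, then repeats every periodSeconds:

```python
def probe_times(initial_delay: int, period: int, count: int) -> list[int]:
    """Seconds after container start at which the kubelet probes the endpoint."""
    return [initial_delay + period * i for i in range(count)]

# livenessProbe: initialDelaySeconds: 30, periodSeconds: 10
print(probe_times(30, 10, 4))  # [30, 40, 50, 60]
# readinessProbe: initialDelaySeconds: 5, periodSeconds: 5
print(probe_times(5, 5, 4))    # [5, 10, 15, 20]
```

So readiness is checked early and often, while liveness waits for the app to warm up before the first check.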

A simple web API in various languages and deployable to Kubernetes (Go)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Go implementation.

Implementation

I’m using JetBrains GoLand for this project, so I created a project named echo_service.

If it doesn’t exist add the go.mod file or update it to have following

module echo_service

go 1.24

add a main.go file with the following

package main

import (
	"fmt"
	"net/http"
)

func livezHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("ok\n"))
}

func readyzHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("ok\n"))
}

func echoHandler(w http.ResponseWriter, r *http.Request) {
	text := r.URL.Query().Get("text")
	fmt.Fprintf(w, "Go Echo: %s\n", text)
}

func main() {
	http.HandleFunc("/echo", echoHandler)
	http.HandleFunc("/livez", livezHandler)
	http.HandleFunc("/readyz", readyzHandler)

	fmt.Println("Echo service running on port 8080...")
	err := http.ListenAndServe(":8080", nil)
	if err != nil {
		fmt.Println("Failed to start the service")
		return
	}
}

Dockerfile

Next up we need to create our Dockerfile

FROM golang:1.24-alpine
WORKDIR /app
COPY . .
RUN go build -o echo_service .
CMD ["./echo_service"]

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.
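The resource quantities in the deployment (“100m” CPU, “100Mi” memory) use Kubernetes’ quantity notation: “m” is milli-cores (thousandths of a CPU) and “Mi” is mebibytes (2^20 bytes). A quick sketch of decoding the common cases:

```python
def parse_cpu(q: str) -> float:
    """CPU quantity in cores; an 'm' suffix means milli-cores."""
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_memory_bytes(q: str) -> int:
    """Memory quantity in bytes for the common binary suffixes."""
    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[:-2]) * factor
    return int(q)  # a plain number is already bytes

print(parse_cpu("100m"))            # 0.1 (a tenth of a core)
print(parse_memory_bytes("100Mi"))  # 104857600
```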

A simple web API in various languages and deployable to Kubernetes (Python)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Python implementation.

Implementation

I’m using JetBrains PyCharm for this project, so I created a project named echo_service.

Next, add the file app.py with the following code

from flask import Flask, request

app = Flask(__name__)

@app.route('/echo')
def echo():
    text = request.args.get('text', '')
    return f"Python Echo: {text}", 200

@app.route('/livez')
def livez():
    return "OK", 200

@app.route('/readyz')
def readyz():
    return "Ready", 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)

Add a requirements.txt file with the following

flask
gunicorn

Don’t forget to install the packages via the IDE.

Dockerfile

Next up we need to create our Dockerfile

# Use a lightweight Python base
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

CMD ["gunicorn", "-w", "2", "-b", "0.0.0.0:8080", "app:app"]

Note: we’ll be using gunicorn instead of the development server.

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

A simple web API in various languages and deployable to Kubernetes (Rust)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Rust implementation.

Implementation

I’m using JetBrains RustRover for this project, so I created a project named echo_service.

Next, add the following to the dependencies of Cargo.toml

axum = "0.7"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }

and now the main.rs can be replaced with

use axum::{
    routing::get,
    extract::Query,
    http::StatusCode,
    response::IntoResponse,
    Router,
};
use tokio::net::TcpListener;
use axum::serve;
use std::net::SocketAddr;
use serde::Deserialize;

#[derive(Deserialize)]
struct EchoParams {
    text: Option<String>,
}

async fn echo(Query(params): Query<EchoParams>) -> String {
    format!("Rust Echo: {}", params.text.unwrap_or_default())
}

async fn livez() -> impl IntoResponse {
    (StatusCode::OK, "OK")
}

async fn readyz() -> impl IntoResponse {
    (StatusCode::OK, "Ready")
}

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/echo", get(echo))
        .route("/livez", get(livez))
        .route("/readyz", get(readyz));

    let addr = SocketAddr::from(([0, 0, 0, 0], 8080));
    println!("Running on http://{}", addr);

    let listener = TcpListener::bind(addr).await.unwrap();
    serve(listener, app).await.unwrap();

}

Dockerfile

Next up we need to create our Dockerfile

FROM rust:1.72-slim AS builder

WORKDIR /app
COPY . .

RUN cargo build --release

FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*

COPY --from=builder /app/target/release/echo_service /usr/local/bin/echo_service

RUN chmod +x /usr/local/bin/echo_service

EXPOSE 8080

ENTRYPOINT ["/usr/local/bin/echo_service"]

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

A simple web API in various languages and deployable to Kubernetes (C#)

Introduction

I’m always interested in how different programming languages and their libs/frameworks tackle the same problem. Recently the topic of writing web APIs in whatever language we wanted came up, and so I thought, well, let’s try to do just that.

The service is maybe too simple for a really good exploration of the frameworks and language features of the languages I’m going to use, but at the same time I wanted to do just the bare minimum to have something working.

The service is an “echo” service: it has an endpoint that simply passes back what’s sent to it (prefixed with some text) and also supplies livez and readyz endpoints, as I want to also create a Dockerfile and the associated k8s yaml files to deploy the service.

The healthz endpoint is deprecated as of k8s v1.16, so we’ll leave that one out.

It should be noted that there are (in some cases) other frameworks that can be used and optimisations, my interest is solely to get some basic Web API deployed to k8s that works, so you may have preferences for other ways to do this.

C# Minimal API

Let’s start with an ASP.NET core, minimal API, web API…

  • Create an ASP.NET core Web API project in Visual Studio
  • Enable container support and I’ve chosen Linux OS
  • Ensure Container build type is set to Dockerfile
  • I’m using minimal API so ensure “Use Controllers” is not checked

Now let’s just replace Program.cs with the following

using Microsoft.AspNetCore.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHealthChecks();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapGet("/echo", (string text) =>
    {
        app.Logger.LogInformation($"C# Echo: {text}");
        return $"C# Echo: {text}";
    })
    .WithName("Echo")
    .WithOpenApi();

app.MapHealthChecks("/livez");
app.MapHealthChecks("/readyz", new HealthCheckOptions
{
    Predicate = _ => true
});

app.Run();

Docker

Next we need to copy the Dockerfile from the csproj folder to the sln folder – for completeness here’s the Dockerfile generated by Visual Studio (comments removed)

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER $APP_UID
WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["EchoService/EchoService.csproj", "EchoService/"]
RUN dotnet restore "./EchoService/EchoService.csproj"
COPY . .
WORKDIR "/src/EchoService"
RUN dotnet build "./EchoService.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./EchoService.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "EchoService.dll"]

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo-service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo-service:v1

and we can test it using “http://localhost:8080/echo?text=Putridparrot”
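If the text contains spaces or other special characters, remember that it has to be URL encoded in the query string. A quick sketch of building such a URL:

```python
from urllib.parse import urlencode

base = "http://localhost:8080/echo"
# urlencode percent-encodes the value (spaces become '+', '&' becomes '%26')
query = urlencode({"text": "hello world & more"})
print(f"{base}?{query}")  # http://localhost:8080/echo?text=hello+world+%26+more
```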

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo-service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

Adding Nginx Ingress controller to your Kubernetes cluster

You’ve created your Kubernetes cluster and added a service, so you’ve set up deployments, services and ingress, but now you want to expose the cluster to the outside world.

We need to add an ingress controller such as nginx.

  • helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    
  • helm repo update
    
  • helm install ingress-nginx ingress-nginx/ingress-nginx \
      --create-namespace \
      --namespace ingress-nginx \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
      --set controller.service.externalTrafficPolicy=Local
    

Note: In my case my namespace is ingress-nginx, but you can set to what you prefer.

Now I should say, originally I installed using

helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

and I seemed to not be able to reach this from the web, but I’m including it here just for reference.

To get your EXTERNAL-IP, i.e. the one exposed to the web, use the following (replace the -n with the namespace you used).

kubectl get svc ingress-nginx-controller -n ingress-nginx

If you’re using some script to get the IP, you can extract just that using

kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Now my IP is not static, so upon redeploying the controller it’s possible this might change, so be aware of this. Of course to get around it you could create a static IP with Azure (at a nominal cost).

Still not accessing your services, getting 404s with the Nginx web page displayed?

Remember that in your ingress (i.e. the services ingress), you might have something similar to below.

Here we set the host name, hence in this case the service will NOT be accessed via the IP itself; it needs to be accessed via the domain name so that the request matches against the ingress for the service.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com  # Replace with your actual domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 80
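The host matching can be sketched as follows (a big simplification of what the ingress controller actually does): the controller compares the request’s Host header against each rule, and requests that match no rule fall through to the default backend, which is where that Nginx 404 page comes from.

```python
# Hypothetical rule table mirroring the ingress above: (host, path prefix, service)
RULES = [("mydomain.com", "/", "hello-service")]

def route(host: str, path: str) -> str:
    """Pick a backend service by Host header and path prefix."""
    for rule_host, prefix, service in RULES:
        if host == rule_host and path.startswith(prefix):
            return service
    return "default-backend (Nginx 404 page)"

print(route("mydomain.com", "/"))  # hello-service
print(route("20.30.40.50", "/"))   # default-backend (Nginx 404 page)
```

Hitting the controller by raw IP means the Host header is the IP, not your domain, so the request never matches the rule.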