Author Archives: purpleblob

Investigating pod resources and usage

Top

kubectl top pod
# Or we could use labels, for example app=ui, app=proxy etc.
kubectl top pod -l 'app in (ui, proxy, api)' -n my-namespace

Check the pods configuration

kubectl describe pod <pod-name> | grep -A5 "Limits"

This prints the “Limits” line plus the five lines after it, for example

Limits:
  cpu:     500m
  memory:  1Gi
Requests:
  cpu:      50m
  memory:   256Mi
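As an alternative to grep, you could pull the resources section out directly with jsonpath; a minimal sketch, assuming a pod named my-pod in my-namespace (substitute your own names):

```shell
# Print the name and resources (requests/limits) of every container in the pod
kubectl get pod my-pod -n my-namespace \
  -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'
```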

Resource Quotas

kubectl get resourcequotas
kubectl get resourcequotas -n my-namespace
kubectl describe resourcequota {name from above call} -n my-namespace

CPU Throttling

# cgroup v2
kubectl exec <pod-name> -- cat /sys/fs/cgroup/cpu.stat
# cgroup v1
kubectl exec <pod-name> -- cat /sys/fs/cgroup/cpu/cpu.stat

For example

usage_usec 177631637
user_usec 89639616
system_usec 87992020
nr_periods 191754
nr_throttled 271
throttled_usec 11291159

– nr_periods – The number of scheduling periods that have occurred.
– nr_throttled – The number of times the process was throttled due to exceeding CPU limits.
– throttled_usec – The total time spent throttled, in microseconds.
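A quick way to judge how badly a pod is being throttled is the ratio of nr_throttled to nr_periods; here’s a small awk sketch run against the sample cpu.stat output above:

```shell
# Calculate the percentage of scheduling periods in which the
# container was throttled, using the sample cpu.stat values above
awk '/^nr_periods/ {p=$2} /^nr_throttled/ {t=$2} END {printf "%.2f%% of periods throttled\n", (t/p)*100}' <<'EOF'
usage_usec 177631637
user_usec 89639616
system_usec 87992020
nr_periods 191754
nr_throttled 271
throttled_usec 11291159
EOF
# → 0.14% of periods throttled
```

In a live cluster you’d simply pipe the kubectl exec output into the same awk script.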

Kubernetes port forwarding

We might deploy something to a pod which doesn’t have an external interface, or we might just want to debug our deployed pod without going through load balancers etc. Kubernetes allows us to essentially connect to and redirect a pod via its port, so for example I might have a pod named “my-pod” on port 5000 within Kubernetes. I want to access this via curl or a browser or whatever.

Hence we use the following command

kubectl port-forward pod/my-pod 8080:5000

and now we can access the application running in this pod using something like this

curl localhost:8080
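Port forwarding isn’t limited to a single pod; you can also target a service or deployment so you don’t need the generated pod name. A sketch, assuming a service named my-service exposing port 80 and a deployment named my-deployment (substitute your own names):

```shell
# Forward local port 8080 to port 80 of the service named my-service
kubectl port-forward service/my-service 8080:80

# Or target a deployment (forwards to one of its pods)
kubectl port-forward deployment/my-deployment 8080:5000
```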

Rust, postfix “?”

Let’s assume we have a function such as the one below, where the highlighted line ends in a “?” – what’s this doing?

fn get_history() -> Result<Vec<Revision>, String> {
   let revisions: Vec<Revision> = get_revisions()?;
   return Ok(revisions)
}

We can see that the return is a Result – which is an enum that essentially looks like this

enum Result<T, E> {
    Ok(T),
    Err(E),
}

Hence our get_history function can return a Vec<Revision> which might be Ok (for success, of course) or an Err (for an error).

Okay, so what’s the highlighted code doing, especially as we only appear to return an Ok?

This is essentially the same as the following

let revisions = match get_revisions() {
  Ok(val) => val,
  Err(e) => return Err(e)
};

As we can see, this is a nice bit of syntactic sugar to either return an error from the function OR assign the Ok value to the revisions variable.

Pod disruption budgets in Kubernetes

The PodDisruptionBudget kind (or PDB) is used to configure the availability of voluntary disruptions.

To give a little more detail, this is a policy that, for example, limits how many pods can be disrupted at once. This ensures a minimum number of pods remain available during operations such as node upgrades, autoscaling or voluntary evictions, and is a way to ensure serving capacity remains at a given level during upgrades etc.

Here’s an example yaml file for this

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: echo-pdb
  namespace: dev
spec:
  minAvailable: 1  # At least one pod must remain available; alternatively use maxUnavailable: 1 for the maximum which can be unavailable
  selector:
    matchLabels:
      app: echo

In this example we use “minAvailable”; you could instead use “maxUnavailable”, but not both.
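Once applied, you can check how many disruptions the budget currently allows; a sketch, assuming the echo-pdb example above was saved as pdb.yaml:

```shell
# Apply the PDB and inspect it - the ALLOWED DISRUPTIONS column shows
# how many pods may currently be evicted voluntarily
kubectl apply -f pdb.yaml
kubectl get pdb echo-pdb -n dev
```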

Kubernetes Jobs (one off tasks)

In the last post I created a simple application to be used within a schedule, i.e. a CronJob in Kubernetes.

We can also create one-off tasks (or a Job) which might be used for migrations or some batch processing. We’re going to use everything from the previous post to build, containerize and push our image to a container registry. The only change is to use the supplied job.yaml file, listed below

apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-job
  namespace: dev
spec:
  template:
    spec:
      containers:
      - name: one-time-job
        image: putridparrotreg/putridparrot/crj:1.0.0
      restartPolicy: Never

Running the following “kubectl get jobs -n dev” results in something like this

NAME           STATUS     COMPLETIONS   DURATION   AGE
one-time-job   Complete   1/1           5s         41s

and if we check the pods with “kubectl get pods -n dev” we’ll see something like this

NAME                 READY   STATUS      RESTARTS   AGE
one-time-job-h5dvf   0/1     Completed   0          3m23s

and of course we can see the logs of this run via “kubectl logs one-time-job-h5dvf -n dev” and we get our application output, i.e. the date/time it was run

Current date and time: 2025-08-17 15:39:53.114962479 +00:00

You’ll note that the pod remained in the cluster; this allowed us to view the logs etc. It’s down to the developer/devops to delete the job and pod unless…

We can actually set up automated deletion of the pod using the “ttlSecondsAfterFinished” option in the yaml file, i.e.

apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-job
  namespace: dev
spec:
  ttlSecondsAfterFinished: 300  # Deletes Job and its Pods 5 minutes after completion
  template:
    spec:
      containers:
      - name: one-time-job
        image: putridparrotreg/putridparrot/crj:1.0.0
      restartPolicy: Never

We also have the option of “activeDeadlineSeconds”. This does not delete or clean up anything, but it can be used in the “spec:” section, like “ttlSecondsAfterFinished”, to denote that the job will be killed off if not finished. So for example

apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-job
  namespace: dev
spec:
  ttlSecondsAfterFinished: 300  # Deletes Job and its Pods 5 minutes after completion
  activeDeadlineSeconds: 600 # Job will be killed even if not finished, in 10 minutes
  template:
    spec:
      containers:
      - name: one-time-job
        image: putridparrotreg/putridparrot/crj:1.0.0
      restartPolicy: Never
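If you’d rather clean up manually instead of (or in addition to) using ttlSecondsAfterFinished, deleting the Job also removes its pods:

```shell
# Deleting the Job cascades to its pods by default
kubectl delete job one-time-job -n dev
```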

Kubernetes cronjobs

You know the scenario: you’re wanting to run jobs either at certain points in the day or throughout the day every N timespans (i.e. every 5 mins).

Kubernetes has you covered, there’s a specific “kind” of job for this, as you guessed from the title, the CronJob.

An example app.

Let’s assume you created yourself a job – I’m going to create a simple job that just outputs the date/time at the scheduled time. I’ve written this in Rust but to be honest it’s simple enough that this could be any language.

The application is just a standard console application named crj (for cronjob or cron rust job, I really didn’t think about it :)). Here’s the Cargo.toml

[package]
name = "crj"
version = "0.1.0"
edition = "2024"

[dependencies]
chrono = "0.4"

Here’s the code

use chrono::Local;

fn main() {
    let now = Local::now();
    println!("Current date and time: {}", now);
}

See I told you it was simple.

Docker

For completeness, here’s the Dockerfile and the steps to get things built, tagged and pushed

FROM rust:1.89.0-slim AS builder

WORKDIR /app
COPY . .

RUN cargo build --release

FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Copy just the release binary rather than the whole target/release directory
COPY --from=builder /app/target/release/crj /usr/local/bin/crj

RUN chmod +x /usr/local/bin/crj

ENTRYPOINT ["/usr/local/bin/crj"]

Next up we need to build the image using the following (remember to use your own image name as well as the correct name for your container registry)

docker build -t putridparrot/crj:1.0.0 .
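Before tagging and pushing, it’s worth a quick local run to confirm the image works; this should just print the current date/time:

```shell
# Run the container once locally; --rm cleans it up afterwards
docker run --rm putridparrot/crj:1.0.0
```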

then tag it using

docker tag putridparrot/crj:1.0.0 putridparrotreg/putridparrot/crj:1.0.0

Finally we’ll push it to our container registry using

docker push putridparrotreg/putridparrot/crj:1.0.0

Kubernetes CronJob

All pretty standard stuff and to be honest the next bit is simple enough. We need to create a Kubernetes yaml file (or helm charts). Here’s my cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scheduled-job
  namespace: dev
spec:
  schedule: "*/5 * * * *" # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: scheduled-job
              image:  putridparrotreg/putridparrot/crj:1.0.0
          restartPolicy: Never

My cronjob has the name scheduled-job (I know, not very imaginative). We apply this file to Kubernetes as usual i.e.

kubectl apply -f .\cronjob.yaml
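Rather than waiting up to five minutes for the first run, you can trigger the CronJob immediately by creating a one-off Job from its template (the name manual-run is just an example):

```shell
# Create a Job from the CronJob's template and run it now
kubectl create job manual-run --from=cronjob/scheduled-job -n dev
```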

Did it work?

We’ll of course want to take a look at what happened after this CronJob was set up in Kubernetes. We can simply use the following (or replace --all-namespaces with a specific namespace, such as dev in my case).

kubectl get cronjobs --all-namespaces -w

you’ll see something like this

NAMESPACE   NAME            SCHEDULE      TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
dev         scheduled-job   */5 * * * *   <none>     False     0        <none>          9s
dev         scheduled-job   */5 * * * *   <none>     False     1        0s              16s
dev         scheduled-job   */5 * * * *   <none>     False     0        13s             29s
dev         scheduled-job   */5 * * * *   <none>     False     1        0s              5m16s

In my case the job starts (ACTIVE) and then completes and shuts down. Then 5 minutes later it starts again as expected with this cron schedule.

On the pods side you can run

kubectl get pods -n dev -w

Now what you’ll see is something like this

NAME                           READY   STATUS              RESTARTS   AGE
scheduled-job-29257380-5w4rg   0/1     Completed           0          51s
scheduled-job-29257385-qgml2   0/1     Pending             0          0s
scheduled-job-29257385-qgml2   0/1     Pending             0          0s
scheduled-job-29257385-qgml2   0/1     ContainerCreating   0          0s
scheduled-job-29257385-qgml2   1/1     Running             0          2s
scheduled-job-29257385-qgml2   0/1     Completed           0          3s
scheduled-job-29257385-qgml2   0/1     Completed           0          5s
scheduled-job-29257385-qgml2   0/1     Completed           0          5s
scheduled-job-29257390-2x98r   0/1     Pending             0          0s
scheduled-job-29257390-2x98r   0/1     Pending             0          0s
scheduled-job-29257390-2x98r   0/1     ContainerCreating   0          0s
scheduled-job-29257390-2x98r   1/1     Running             0          2s

Notice that the pod is created and goes into a “Pending” state, then “ContainerCreating”, before “Running” and finally “Completed”; however, each run of the cronjob creates a pod with a new name. Therefore, if you view a pod’s logs, i.e. kubectl logs scheduled-job-29257380-5w4rg -n dev – then you’ll get something like the below, but you cannot -f (follow) the logs, as the next time the job runs it creates a new pod.

Current date and time: 2025-08-17 15:00:09.294317303 +00:00

Closures in Rust

A “regular” closure within Rust uses the following syntax

let name = String::from("PutridParrot");
let hello = || println!("Hello {}", name);

In this simple example, the name is captured within the closure, which is the function

|| println!("Hello {}", name);

The name variable remains usable after the closure. However, there’s another type of closure, the move closure, which uses the move keyword i.e.

let name = String::from("PutridParrot");
let hello = move || println!("Hello {}", name);

The difference here is that the name variable is no longer usable after the closure. Essentially the closure takes ownership of all enclosed variables.

The main uses of move closures are within threading, so the thread takes ownership of its data; within async blocks, which often require owned values; and when passing values into boxed trait objects.

Configuring your DNS through to your Azure Kubernetes Cluster

Note: I’m going to have to list the steps I think I took to buy a domain name on the Azure Portal, as I didn’t note all the steps down at the time – so please double check things when creating your own.

You can create your domain wherever you like; I happened to have decided to create mine via Azure.

  • Go to the Azure portal
  • Search for DNS Zones and click the Create button
  • Supply your subscription and resource group
  • Select your domain name
  • Click Review + Create, then Create, to create the DNS Zone

To set up the DNS zone (I cannot recall if this was part of the above or a separate step), run

az network dns zone create \
--resource-group {RESOURCE_GROUP} \
--name {DOMAIN_NAME}
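If you registered the domain outside Azure, you’ll need to point your registrar at Azure’s name servers; you can list the name servers assigned to the new zone like so:

```shell
# Show the Azure name servers assigned to the zone - these go into
# your registrar's NS settings if the domain wasn't bought via Azure
az network dns zone show \
  --resource-group {RESOURCE_GROUP} \
  --name {DOMAIN_NAME} \
  --query nameServers
```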

I’m going to assume you have Kubernetes installed.

We need a way to get from the outside world into our Kubernetes cluster, so we’ll create an ingress controller using

helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace --namespace ingress-nginx \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.service.externalTrafficPolicy=Local

Next we need to update our DNS record to use the EXTERNAL_IP of the ingress controller we’ve just created, so

  • Run the following to get the EXTERNAL_IP
    kubectl get svc ingress-nginx-controller -n ingress-nginx
    
  • You can go into the DNS record and change the A record (@ Type and any other subdomains you’ve added) to use the EXTERNAL_IP address or use
    az network dns record-set a add-record --resource-group {RESOURCE_GROUP} \
    --zone-name {DOMAIN_NAME} --record-set-name "@" --ipv4-address {EXTERNAL_IP}
    

At this point you’ll obviously need to set up your service with its own ingress using your domain in the “host” value of the ingress.

A simple web API in various languages and deployable to Kubernetes (Node/Typescript)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Node implementation.

Add the file app.ts to the /src folder with the following content

import express, { Request, Response } from 'express';
import bodyParser from 'body-parser';

const app = express();
const PORT = process.env.PORT || 8080;

app.use(bodyParser.json());


app.get('/echo', (req: Request, res: Response) => {
  const queryParams = req.query;
  res.type('text/plain');
  res.send(`Node Echo: ${queryParams.text}`);
});

app.get('/livez', (_req: Request, res: Response) => {
  res.sendStatus(200);
});

app.get('/readyz', async (_req: Request, res: Response) => {
  try {
    res.sendStatus(200);
  } catch (err) {
    res.status(503).send('Service not ready');
  }
});

app.listen(PORT, () => {
  console.log(`Echo service is live at http://localhost:${PORT}`);
});

Dockerfile

Next up we need to create our Dockerfile

FROM node:24-alpine

WORKDIR /app

COPY package.json package-lock.json* tsconfig.json ./

RUN npm install

COPY src ./src

#RUN npx tsc

EXPOSE 8080

CMD ["npx", "ts-node", "src/app.ts"]

Note: On Linux, ports below 1024 (such as 80) typically require elevated privileges, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1
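With the container running you can hit the endpoints from another terminal; based on the app.ts above, you should see the echoed text and 200 responses from the probes:

```shell
# Exercise the echo endpoint and the liveness/readiness probes
curl "http://localhost:8080/echo?text=hello"   # Node Echo: hello
curl -i http://localhost:8080/livez            # HTTP/1.1 200 OK
curl -i http://localhost:8080/readyz           # HTTP/1.1 200 OK
```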

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file. We’ll store all the configuration – deployment, service and ingress – in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service # underscores aren't valid in Kubernetes resource names
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.
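The steps above boil down to applying the file and checking the resources came up; a sketch, assuming the file above and the mydomain.com host:

```shell
# Apply the deployment, service and ingress, then verify
kubectl apply -f deployment.yaml
kubectl get pods,svc,ingress -n dev

# Once the ingress has an address, test via your domain
curl "http://mydomain.com/echo?text=hello"
```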