Category Archives: Kubernetes

Increasing the body size of requests (with your ASP.NET core application within Kubernetes)

I came across an interesting issue whereby we wanted to allow larger files to be uploaded through our ASP.NET Core API, through to an Azure Function, all of this hosted within Kubernetes.

The first thing to note is that if your traffic passes through something like Cloudflare, Akamai or Traffic Manager, changes at those layers are outside the scope of this post.

Kubernetes Ingress

Let’s first look at Kubernetes; the ingress to your application may have something like this

className: nginx
annotations:
  nginx.ingress.kubernetes.io/proxy-buffer-size: "100m"
  nginx.ingress.kubernetes.io/proxy-body-size: "100m"
...

In the above we set the buffer and body size to 100MB. One thing to note is that when we had these closer to the actual file size we wanted to support, requests were still rejected – the request body appears larger than the file itself (likely due to multipart/encoding overhead), so you might need to set these values a little higher than your target file size.

Kestrel

The change to the Kubernetes ingress now allows requests of up to 100MB, but you may now find the request rejected by ASP.NET Core, or more specifically Kestrel.

Kestrel (at the time of writing) has a default MaxRequestBodySize of 30MB, so we need to add the following

builder.WebHost.ConfigureKestrel(serverOptions =>
{
  serverOptions.Limits.MaxRequestBodySize = 104857600; // 100 MB in bytes
});
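As a quick sanity check on that magic number, 104857600 is simply 100 MiB expressed in bytes, which a line of shell arithmetic confirms:

```shell
# 100 * 1024 * 1024 bytes - the MaxRequestBodySize value used above
echo $((100 * 1024 * 1024))   # 104857600
```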

Azure Functions

Next up, we’re using Azure Functions, where by default (when on the pro consumption plan) the maximum request body size is 100MB. However, if you need to or want to change/fix this in place, you can edit the host.json file to include this

"http": {
  "maxRequestBodySize": 100
}

Obviously if you have code in place anywhere that also acts as a limit, you’ll need to amend that as well.

Anything else?

Depending on the size of files and the time it takes to process them, you might also need to review your timeouts on HttpClient or whatever mechanism you’re using.

Init containers in Kubernetes

Init containers can be used to perform initialization logic before the main containers run. These might include

  • Waiting for a service to become available
  • Running database migrations
  • Copying files to a shared location
  • Setting up configuration

Init containers run sequentially, and each must complete successfully before the next one (and ultimately the main containers) can start.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: wait-for-db
        image: busybox
        command: ['sh', '-c', 'until nc -z db-service 5432; do echo waiting; sleep 2; done']
      containers:
      - name: app
        image: my-app-image
        ports:
        - containerPort: 8080

This waits for a PostgreSQL service to become available before our application can start.
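The until loop inside the wait-for-db init container can be exercised locally if you want to see the pattern in action; here’s the same loop, with a file check standing in for the nc TCP probe (purely illustrative, as there’s no database here):

```shell
# Same wait-until pattern as the init container's command, simulated locally.
# A file standing in for 'nc -z db-service 5432' succeeding.
touch /tmp/db-ready   # pretend the database has come up
until [ -f /tmp/db-ready ]; do echo waiting; sleep 2; done
echo "db is up"
rm -f /tmp/db-ready
```

Because the "ready" condition already holds, the loop exits immediately; remove the touch line to watch it poll.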

Webhooks in Kubernetes

Webhooks are HTTP callbacks triggered by the Kubernetes API server during resource operations.

There are two main types

  • Mutating Webhook: Modify or inject fields into a resource
  • Validating Webhook: Accept or reject a resource based upon logic

A validating webhook configuration

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validate-pods.k8s.io
webhooks:
  - name: podcheck.k8s.io
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: pod-validator
        namespace: default
        path: "/validate"
      caBundle: <base64-ca>
    admissionReviewVersions: ["v1"]
    sideEffects: None

Essentially k8s webhooks give us the opportunity to intercept k8s API requests such as CREATE, UPDATE or DELETE. Using validating webhooks we can accept or reject requests without modifying the k8s object.

In the example YAML above, we’re going to intercept CREATE calls for pods. This configuration, named validate-pods.k8s.io, is a validating webhook, which is non-mutating and can reject requests but not modify them. The name of the webhook is podcheck.k8s.io and then we have the rules, which we’ve already touched on. Then we have the clientConfig, which will use our pod-validator service in the default namespace and the path /validate; this means the service is accessible via https://pod-validator.default.svc/validate. The sideEffects value of None means this webhook doesn’t write to external systems, hence it’s safe for retries.

The webhook server must expose an HTTPS endpoint which accepts AdmissionReview requests and should return a response to denote whether the operation can proceed.

The AdmissionReview request will look similar to this for a pod CREATE

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "1234abcd-5678-efgh-ijkl-9012mnopqrst",
    "kind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "resource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "requestKind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "requestResource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "name": null,
    "namespace": "default",
    "operation": "CREATE",
    "userInfo": {
      "username": "system:serviceaccount:default:deployer",
      "uid": "abc123",
      "groups": [
        "system:serviceaccounts",
        "system:authenticated"
      ]
    },
    "object": {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "name": "example-pod",
        "namespace": "default",
        "labels": {
          "app": "demo"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "nginx",
            "image": "nginx:1.21",
            "resources": {
              "limits": {
                "cpu": "500m",
                "memory": "128Mi"
              }
            }
          }
        ]
      }
    },
    "oldObject": null,
    "dryRun": false
  }
}

A response will look something like this

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "1234abcd-5678-efgh-ijkl-9012mnopqrst",
    "allowed": true,
    "status": {
      "code": 200,
      "message": "Pod validated successfully"
    }
  }
}

The allowed field can simply be set to false, with a minimal response like the one below

"allowed": false,
"status": {
  "code": 400,
  "message": "Missing required label: team"
}
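To make the response shape concrete, here’s a hypothetical little shell helper (the function name and arguments are mine, not part of any API) that builds the minimal AdmissionReview response JSON shown above from a request uid and a verdict:

```shell
# Hypothetical helper: build a minimal AdmissionReview response for a given
# request uid, allowed flag and status message - same shape as the JSON above.
admission_response() {
  local uid="$1" allowed="$2" message="$3"
  printf '{"apiVersion":"admission.k8s.io/v1","kind":"AdmissionReview","response":{"uid":"%s","allowed":%s,"status":{"message":"%s"}}}\n' \
    "$uid" "$allowed" "$message"
}

# A rejection, echoing the "Missing required label" example
admission_response "1234abcd-5678-efgh-ijkl-9012mnopqrst" false "Missing required label: team"
```

Note the uid in the response must be copied from the incoming request, otherwise the API server discards the response.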

Kubernetes secret resource

Kubernetes includes a secret resource store.

We can get a list of secrets for a given namespace

kubectl get secrets -n dev

and for all namespaces using

kubectl get secrets --all-namespaces

We can create a secret of the specified type

  • docker-registry Create a secret for use with a container registry
  • generic Create a secret from a local file, directory, or literal value, known as an Opaque secret type
  • tls Create a TLS secret, such as a TLS certificate and its associated key

Hence we specify the type as below (this example uses the generic type)

kubectl create secret generic my-secret \
  --from-literal=username=admin \
  --from-literal=password=secret123 \
  -n dev

With the above command, we created a secret named my-secret containing the key username with value admin, followed by another key/value pair.

A secret can also be created using a Kubernetes YAML file with kind “Secret”

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: YWRtaW4=       # base64 encoded 'admin'
  password: c2VjcmV0MTIz   # base64 encoded 'secret123'
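The base64 values in the data section can be produced on the command line. One common gotcha: echo without -n appends a newline, which gets encoded into the value and then into your pod.

```shell
# Generate the base64 values used in the YAML above.
# -n stops echo appending a newline, which would otherwise be encoded too.
echo -n 'admin' | base64       # YWRtaW4=
echo -n 'secret123' | base64   # c2VjcmV0MTIz
```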

To access and decode a secret value, we can use the following

kubectl get secret my-secret -o jsonpath="{.data.username}" -n dev | base64 --decode

Or using PowerShell

$encoded = kubectl get secret my-secret -o jsonpath="{.data.username}" -n dev
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($encoded))

Here’s an example of using a secret by exposing its values as environment variables

env:
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: username

this gives us a DB_USER environment variable within the container (e.g. process.env.DB_USER in Node.js).

Another use is mounting the secret via a pod volume, hence exposing it through the pod’s file system

volumes:
  - name: secret-volume
    secret:
      secretName: my-secret

volumeMounts:
  - name: secret-volume
    mountPath: "/etc/secret"
    readOnly: true
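With this mount in place, each key in the secret becomes a file under the mount path whose content is the decoded value. A local simulation of what the container would see (the /tmp path here is just for illustration; in the pod it would be /etc/secret):

```shell
# Simulate the mounted secret: one file per key, containing the decoded value.
mkdir -p /tmp/etc-secret
echo -n 'admin' > /tmp/etc-secret/username
echo -n 'secret123' > /tmp/etc-secret/password

cat /tmp/etc-secret/username   # admin
```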

A simple Rust application using the Kube client

Rust has a Kubernetes client crate, kube, which allows us to write code against Kubernetes via the client (i.e. instead of calling out to kubectl itself).

Create yourself a binary Rust application with the following dependencies

[dependencies]
k8s-openapi = { version = "0.26.0", default-features = false, features = ["v1_32"] }
kube = { version = "2.0.1", features = ["runtime", "client"] }
tokio = { version = "1.30", features = ["full"] }
anyhow = "1.0"

Note: Check the k8s-openapi features match your installed Kubernetes version – run kubectl version to check your server version and use the matching feature.

This is very much a starter post, so I’m going to just change main.rs to simply instantiate a client and get all pods running across all namespaces

use kube::{Api, Client};
use k8s_openapi::api::core::v1::Pod;
use kube::runtime::reflector::Lookup;

#[tokio::main]
async fn main() -> anyhow::Result<()> {

  let client = Client::try_default().await?;
  let pods: Api<Pod> = Api::all(client);

  let pod_list = pods.list(&Default::default()).await?;

  for p in pod_list {
    println!("Pod name: {:?}", p.name().expect("Pod name missing"));
  }

  Ok(())
}

If you want to use the default namespace then change the line

let pods: Api<Pod> = Api::all(client);

to

let pods: Api<Pod> = Api::default_namespaced(client);

or if you want to get pods from a specific namespace use

let pods: Api<Pod> = Api::namespaced(client, "mynamespace");

Note: Of course you could use “default” for the default namespace or “” for all namespaces in place of “mynamespace”.

Code

Code is available on GitHub.

A simple web API in various languages and deployable to Kubernetes (Java)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Java implementation.

Implementation

I’m going to be using JetBrains IntelliJ.

  • Create a new Java Project, selecting Maven as the build system
  • We’re going to use Spring Boot, so add the following to the pom.xml
    <dependencies>
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <version>3.5.5</version>
      </dependency>
    </dependencies>
    
  • We’re also going to want to use the Spring Boot Maven plugin to generate our JAR and Manifest
    <build>
      <plugins>
        <plugin>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-maven-plugin</artifactId>
          <version>3.5.5</version>
          <executions>
            <execution>
              <goals>
                <goal>repackage</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
    
  • Let’s delete the supplied Main.java file and replace with one named EchoServiceApplication.java which looks like this
    package com.putridparrot;
    
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    
    import java.util.HashMap;
    import java.util.Map;
    
    @SpringBootApplication
    public class EchoServiceApplication {
        public static void main(String[] args) {
            SpringApplication app = new SpringApplication(EchoServiceApplication.class);
            Map<String, Object> props = new HashMap<>();
            props.put("server.port", System.getenv("PORT"));
            app.setDefaultProperties(props);
            app.run(args);
        }
    }
    

    We’re setting the PORT here from the environment variable as this will be supplied via the Dockerfile

  • Next add a new file named EchoController.java which will look like this
    package com.putridparrot;
    
    import org.springframework.web.bind.annotation.*;
    
    @RestController
    public class EchoController {
    
        @GetMapping("/echo")
        public String echo(@RequestParam(name = "text", defaultValue = "") String text) {
            return String.format("Java Echo: %s", text);
        }
    
        @GetMapping("/readyz")
        public String readyz() {
            return "OK";
        }
    
        @GetMapping("/livez")
        public String livez() {
            return "OK";
        }
    }
    

Dockerfile

Next up we need to create our Dockerfile

FROM maven:3.9.11-eclipse-temurin-21 AS builder

WORKDIR /app

COPY . .
RUN mvn clean package -DskipTests

FROM eclipse-temurin:21-jre AS runtime

WORKDIR /app

COPY --from=builder /app/target/echo_service-1.0-SNAPSHOT.jar ./echo-service.jar

ENV PORT=8080
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "echo-service.jar"]

Note: In Linux port 80 might be locked down, hence we use port 8080 – to override the default port in Spring Boot we also set the environment variable PORT.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

To test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you’ve created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

A simple web API in various languages and deployable to Kubernetes (Elixir)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to an Elixir implementation.

Implementation

I’m going to be using Visual Studio Code and dev containers, so I created a folder echo_service containing a folder named .devcontainer with the following devcontainer.json

{
    "image": "elixir",
    "forwardPorts": [3000]
}

Next I opened Visual Studio Code in the echo_service folder; it should detect the devcontainer and ask if you want to reopen in the devcontainer, which we do.

I’m going to use Phoenix Server (phx), so I open the terminal in Visual Studio Code and run the following

  • First I needed to install the phx installer, using

    mix archive.install hex phx_new
    
  • Next I want to generate a minimal phx server, hence run the following

    mix phx.new echo_service --no-html --no-ecto --no-mailer --no-dashboard --no-assets --no-gettext
    

    When this prompt appears, type y

    Fetch and install dependencies? [Yn]
    
  • Now cd into the newly created echo_service folder
  • To check everything is working, run
    mix phx.server
    

Next we need to add a couple of controllers (we could just use one, but I’m going to create an Echo controller and a Health controller). So in lib/echo_service_web/controllers add the files echo_controller.ex and health_controller.ex

The echo_controller.ex looks like this

defmodule EchoServiceWeb.EchoController do
  use Phoenix.Controller, formats: [:html, :json]

  def index(conn, %{"text" => text}) do
    send_resp(conn, 200, "Elixir Echo: #{text}")
  end
end

The health_controller.ex should look like this

defmodule EchoServiceWeb.HealthController do
  use Phoenix.Controller, formats: [:html, :json]

  def livez(conn, _params) do
    send_resp(conn, 200, "Live")
  end

  def readyz(conn, _params) do
    send_resp(conn, 200, "Ready")
  end
end

In the parent folder (i.e. lib/echo_service_web) edit router.ex so it looks like this

defmodule EchoServiceWeb.Router do
  use EchoServiceWeb, :router

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/", EchoServiceWeb do
    # pipe_through :api
    get "/echo", EchoController, :index
    get "/livez", HealthController, :livez
    get "/readyz", HealthController, :readyz
  end
end

Now we can run mix phx.server again (ctrl+c twice to shut down any existing running instance).

Dockerfile

Next up we need to create our Dockerfile

FROM elixir:latest

RUN mkdir /app
COPY . /app
WORKDIR /app

RUN mix local.hex --force && mix local.rebar --force
RUN mix deps.get
RUN mix compile

ENV PORT=8080
EXPOSE 8080

CMD ["mix", "phx.server"]

Note: In Linux port 80 might be locked down, hence we use port 8080 – to override the default port in phx we also set the environment variable PORT.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

To test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you’ve created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

Investigating pod resources and usage

Top

kubectl top pod
# Or we could use labels, for example app=ui, app=proxy etc.
kubectl top pod -l 'app in (ui, proxy, api)' -n my-namespace

Check the pods configuration

kubectl describe pod <pod-name> | grep -A5 "Limits"

Prints the five lines after the “Limits” section, for example

Limits:
  cpu:     500m
  memory:  1Gi
Requests:
  cpu:      50m
  memory:   256Mi
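To see exactly how grep -A5 slices the describe output, here it is run against a saved sample (mirroring the output above): -A5 prints the matching line plus the five lines that follow it.

```shell
# Save a snippet of 'kubectl describe pod' output, then slice it with grep.
cat <<'EOF' > /tmp/describe.txt
Name: my-pod
Limits:
  cpu:     500m
  memory:  1Gi
Requests:
  cpu:      50m
  memory:   256Mi
EOF

# -A5: the "Limits:" line plus the five lines after it (six lines in total)
grep -A5 "Limits" /tmp/describe.txt
```

Bump the -A value if your pods have more resource entries than fit in five lines.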

Resource Quotas

kubectl get resourcequotas
kubectl get resourcequotas -n my-namespace
kubectl describe resourcequota {name from above call} -n my-namespace

CPU Throttling

kubectl exec <pod-name> -- cat /sys/fs/cgroup/cpu.stat       # cgroup v2
kubectl exec <pod-name> -- cat /sys/fs/cgroup/cpu/cpu.stat   # cgroup v1

For example

usage_usec 177631637
user_usec 89639616
system_usec 87992020
nr_periods 191754
nr_throttled 271
throttled_usec 11291159

– nr_periods – The number of scheduling periods that have occurred.
– nr_throttled – The number of times the process was throttled due to exceeding CPU limits.
– throttled_usec – The total time, in microseconds, the process spent throttled.
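A quick way to turn those counters into something readable is to compute the percentage of periods that were throttled; here’s a sketch using awk against the sample values above (saved to a local file for illustration):

```shell
# Sample cpu.stat values, taken from the output shown above.
cat <<'EOF' > /tmp/cpu.stat
usage_usec 177631637
user_usec 89639616
system_usec 87992020
nr_periods 191754
nr_throttled 271
throttled_usec 11291159
EOF

# Percentage of scheduling periods in which the cgroup was throttled
awk '/^nr_periods/{p=$2} /^nr_throttled/{t=$2} END{printf "%.2f%% of periods throttled\n", t*100/p}' /tmp/cpu.stat
```

For the sample above this works out at 0.14%, i.e. barely any throttling; a persistently high percentage suggests the CPU limit is too low for the workload.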

Kubernetes port forwarding

We might deploy something to a pod which doesn’t have an external interface, or we may just want to debug our deployed pod without going through load balancers etc. Kubernetes allows us to connect to a pod and redirect traffic to one of its ports; for example, I might have a pod named “my-pod” listening on port 5000 within Kubernetes which I want to access via curl, a browser or whatever.

Hence we use the following command

kubectl port-forward pod/my-pod 8080:5000

and now we can access the application running in this pod using something like this

curl localhost:8080