
A simple web API in various languages and deployable to Kubernetes (Go)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Go implementation.

Implementation

I’m using JetBrains GoLand for this project, so I created a project named echo_service.

If it doesn’t already exist, add a go.mod file (or update the existing one) to have the following

module echo_service

go 1.24

Add a main.go file with the following

package main

import (
	"fmt"
	"net/http"
)

func livezHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("ok\n"))
}

func readyzHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("ok\n"))
}

func echoHandler(w http.ResponseWriter, r *http.Request) {
	text := r.URL.Query().Get("text")
	fmt.Fprintf(w, "Go Echo: %s\n", text)
}

func main() {
	http.HandleFunc("/echo", echoHandler)
	http.HandleFunc("/livez", livezHandler)
	http.HandleFunc("/readyz", readyzHandler)

	fmt.Println("Echo service running on port 8080...")
	err := http.ListenAndServe(":8080", nil)
	if err != nil {
		fmt.Println("Failed to start the service:", err)
		return
	}
}
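Before containerising anything, we can check the service runs locally; a quick sketch, assuming Go 1.24 is installed and you’re in the project folder:

```shell
# run the service (listens on port 8080, blocks this terminal)
go run .

# in another terminal, exercise the endpoints
curl "http://localhost:8080/livez"             # ok
curl "http://localhost:8080/echo?text=hello"   # Go Echo: hello
```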

Dockerfile

Next up we need to create our Dockerfile

FROM golang:1.24-alpine
WORKDIR /app
COPY . .
RUN go build -o echo_service .
EXPOSE 8080
CMD ["./echo_service"]

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1
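Alternatively, run the container detached so you can poke at it from the same terminal and then tidy up; this assumes the image name used above:

```shell
# run detached, check it responds, then stop and remove the container
docker run -d --name echo -p 8080:8080 putridparrot.echo_service:v1
curl "http://localhost:8080/echo?text=hello"   # Go Echo: hello
docker stop echo && docker rm echo
```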

Kubernetes

If all went well we’ve now tested our application and seen it working from a Docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure’s. I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.
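The yaml above can be applied and checked with kubectl; a sketch, using the deployment.yaml name from above:

```shell
# create the namespace if you haven't already
kubectl create namespace dev

# apply the deployment, service and ingress
kubectl apply -f deployment.yaml

# watch the pod come up and check the probes are passing
kubectl get pods -n dev -w
kubectl describe pod -l app=echo -n dev
```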

A simple web API in various languages and deployable to Kubernetes (Python)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Python implementation.

Implementation

I’m using JetBrains PyCharm for this project, so I created a project named echo_service.

Next, add the file app.py with the following code

from flask import Flask, request

app = Flask(__name__)

@app.route('/echo')
def echo():
    text = request.args.get('text', '')
    return f"Python Echo: {text}", 200

@app.route('/livez')
def livez():
    return "OK", 200

@app.route('/readyz')
def readyz():
    return "Ready", 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)

Add a requirements.txt file with the following

flask
gunicorn

Don’t forget to install the packages via the IDE.
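If you’d rather install from the command line than via the IDE (a virtual environment is optional but usual):

```shell
# create and activate a virtual environment, then install the dependencies
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```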

Dockerfile

Next up we need to create our Dockerfile

# Use a lightweight Python base
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

CMD ["gunicorn", "-w", "2", "-b", "0.0.0.0:8080", "app:app"]

Note: we’ll be using gunicorn instead of the development server.

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a Docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure’s. I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

A simple web API in various languages and deployable to Kubernetes (Rust)

Continuing this short series of writing a simple echo service web API along with the docker and k8s requirements, we’re now going to turn our attention to a Rust implementation.

Implementation

I’m using JetBrains RustRover for this project, so I created a project named echo_service.

Next, add the following to the dependencies of Cargo.toml

axum = "0.7"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }

and now the main.rs can be replaced with

use axum::{
    routing::get,
    extract::Query,
    http::StatusCode,
    response::IntoResponse,
    Router,
};
use tokio::net::TcpListener;
use axum::serve;
use std::net::SocketAddr;
use serde::Deserialize;

#[derive(Deserialize)]
struct EchoParams {
    text: Option<String>,
}

async fn echo(Query(params): Query<EchoParams>) -> String {
    format!("Rust Echo: {}", params.text.unwrap_or_default())
}

async fn livez() -> impl IntoResponse {
    (StatusCode::OK, "OK")
}

async fn readyz() -> impl IntoResponse {
    (StatusCode::OK, "Ready")
}

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/echo", get(echo))
        .route("/livez", get(livez))
        .route("/readyz", get(readyz));

    let addr = SocketAddr::from(([0, 0, 0, 0], 8080));
    println!("Running on http://{}", addr);

    let listener = TcpListener::bind(addr).await.unwrap();
    serve(listener, app).await.unwrap();

}
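As with the other implementations, we can check this locally before building an image; assumes a Rust toolchain is installed:

```shell
# build and run the service (blocks this terminal)
cargo run

# in another terminal
curl "http://localhost:8080/echo?text=hello"   # Rust Echo: hello
```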

Dockerfile

Next up we need to create our Dockerfile

FROM rust:1.72-slim AS builder

WORKDIR /app
COPY . .

RUN cargo build --release

FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*

COPY --from=builder /app/target/release/echo_service /usr/local/bin/echo_service

EXPOSE 8080

ENTRYPOINT ["/usr/local/bin/echo_service"]

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo_service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo_service:v1

Kubernetes

If all went well we’ve now tested our application and seen it working from a Docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure’s. I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

A simple web API in various languages and deployable to Kubernetes (C#)

Introduction

I’m always interested in how different programming languages and their libs/frameworks tackle the same problem. Recently the topic of writing web APIs in whatever language we wanted came up, and so I thought, well, let’s try to do just that.

The service is maybe too simple to really show off the frameworks and language features of the languages I’m going to use, but at the same time I wanted to do just the bare minimum to have something working.

The service is an “echo” service; it will have an endpoint that simply passes back what’s sent to it (prefixed with some text) and will also supply livez and readyz endpoints, as I want to also create a Dockerfile and the associated k8s yaml files to deploy the service.

The healthz endpoint is deprecated as of k8s v1.16, so we’ll leave that one out.

It should be noted that there are (in some cases) other frameworks and optimisations that could be used; my interest is solely to get a basic Web API deployed to k8s that works, so you may have preferences for other ways to do this.

C# Minimal API

Let’s start with an ASP.NET Core minimal API web API…

  • Create an ASP.NET Core Web API project in Visual Studio
  • Enable container support (I’ve chosen the Linux OS)
  • Ensure Container build type is set to Dockerfile
  • I’m using minimal API, so ensure “Use Controllers” is not checked

Now let’s just replace Program.cs with the following

using Microsoft.AspNetCore.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHealthChecks();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapGet("/echo", (string text) =>
    {
        app.Logger.LogInformation($"C# Echo: {text}");
        return $"C# Echo: {text}";
    })
    .WithName("Echo")
    .WithOpenApi();

app.MapHealthChecks("/livez");
app.MapHealthChecks("/readyz", new HealthCheckOptions
{
    Predicate = _ => true
});

app.Run();

Docker

Next we need to copy the Dockerfile from the csproj folder to the sln folder. For completeness, here’s the Dockerfile generated by Visual Studio (comments removed)

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER $APP_UID
WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["EchoService/EchoService.csproj", "EchoService/"]
RUN dotnet restore "./EchoService/EchoService.csproj"
COPY . .
WORKDIR "/src/EchoService"
RUN dotnet build "./EchoService.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./EchoService.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "EchoService.dll"]

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo-service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo-service:v1

and we can test this using “http://localhost:8080/echo?text=Putridparrot”

Kubernetes

If all went well we’ve now tested our application and seen it working from a Docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure’s. I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo-service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

Creating a local container registry

I feel like I’ve written a post on this before, but it doesn’t hurt to keep things up to date.

I set up a Kubernetes instance along with a container registry etc. within Azure, but if all you want to do is run things locally and at zero cost (other than your usual cost of running a computer), you might want to set up a local container registry.

I’m doing this on Windows for this post, but I expect it’s pretty much the same on Linux and Mac; in my case I am also running Docker Desktop.

docker-compose

We’re going to stand up the registry using docker-compose, but if you’d like to just run the registry from the simple docker run command, you can use this

docker run -d -p 5000:5000 --name registry registry:2

However you’ll probably want a volume set-up, along with a web UI, so let’s put all that together into the docker-compose.yml file below

version: '3.8'

services:
  registry:
    image: registry:2
    container_name: container-registry
    ports:
      - "5000:5000"
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
      REGISTRY_HTTP_HEADERS_Access-Control-Allow-Origin: '["http://localhost:8080"]'
      REGISTRY_HTTP_HEADERS_Access-Control-Allow-Methods: '["GET", "HEAD", "DELETE"]'
      REGISTRY_HTTP_HEADERS_Access-Control-Allow-Credentials: '["true"]'
    volumes:
      - registry-data:/var/lib/registry

  registry-ui:
    image: joxit/docker-registry-ui:latest
    container_name: registry-ui
    ports:
      - "8080:80"
    environment:
      - REGISTRY_TITLE=Private Docker Registry
      - REGISTRY_URL=http://localhost:5000
      - DELETE_IMAGES=true
    depends_on:
      - registry

volumes:
  registry-data:

Note: if you’re going to be running your services on port 8080, you might wish to change the UI here to use port 8081, for example.

In the above I name my container container-registry as I already have a container named registry running on my machine; the REGISTRY_HTTP_HEADERS_Access-Control settings were required as I was getting CORS-like issues. Finally we run up joxit/docker-registry-ui, which gives us a web UI onto our registry.

To run everything just type

docker-compose up

and use ctrl+c to bring this down when in interactive mode or run

docker-compose down

If you want to clear up the volume (i.e. remove it) use

docker-compose down -v

Of course you can also use curl etc. to interact with the registry itself, for example to list the repositories

curl http://localhost:5000/v2/_catalog

Tag and push

We’re obviously going to need to push images to the registry; this is done by first tagging them (as usual) and then pushing, so for example

docker tag putridparrot.echo_service:v1 localhost:5000/putridparrot.echo_service:v1

which tags the image I already created for a simple “echo service”.

Next we push the tagged image to the registry using

docker push localhost:5000/putridparrot.echo_service:v1

If you’re running the web UI you should be able to see the repository with your new tagged image.
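You can also confirm the push via the registry’s v2 HTTP API; both endpoints below are part of the standard distribution API:

```shell
# list all repositories in the registry
curl http://localhost:5000/v2/_catalog

# list the tags for our echo service repository
curl http://localhost:5000/v2/putridparrot.echo_service/tags/list
```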

Pull

Obviously we’ll want to be able to pull an image from the registry either to run locally or to deploy within a Kubernetes cluster etc.

docker pull localhost:5000/putridparrot.echo_service:v1

Deployment

In some coming posts I will be writing an echo service in multiple languages, so let’s assume this echo_service is one of those. We’re going to want to run things locally, so we might have a deployments.yaml with the following deployment and service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: localhost:5000/putridparrot.echo_service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP

Now, for testing, we can use port forwarding in place of an ingress, like this

kubectl port-forward svc/echo-service 8080:80 -n dev

and now use http://localhost:8080/echo?text=Putridparrot

Other options to get this deployment running with ingress require hosts file changes, or we can add a load balancer, for example

apiVersion: v1
kind: Service
metadata:
  name: echo-service
spec:
  type: LoadBalancer
  selector:
    app: echo
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Adding Nginx Ingress controller to your Kubernetes cluster

You’ve created your Kubernetes cluster and added a service, i.e. you’ve set up deployments, services and ingress, but now you want to expose the cluster to the outside world.

We need to add an ingress controller such as nginx.

  • helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    
  • helm repo update
    
  • helm install ingress-nginx ingress-nginx/ingress-nginx \
      --create-namespace \
      --namespace ingress-nginx \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
      --set controller.service.externalTrafficPolicy=Local
    

Note: In my case my namespace is ingress-nginx, but you can set to what you prefer.

Now I should say, originally I installed using

helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

and I seemed not to be able to reach this from the web, but I’m including it here just for reference.

To get your EXTERNAL-IP, i.e. the one exposed to the web, use the following (replace the -n with the namespace you used).

kubectl get svc ingress-nginx-controller -n ingress-nginx

If you’re using some script to get the IP, you can extract just that using

kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Now my IP is not static, so upon redeploying the controller it’s possible this might change, so be aware of this. Of course, to get around it you could create a static IP with Azure (at a nominal cost).

Still not accessing your services, getting 404s with the Nginx web page displayed?

Remember that in your ingress (i.e. the service’s ingress), you might have something similar to the below.

Here we set the host name, hence in this case the service will NOT be accessed via the IP itself; it needs to be accessed via the domain name so that requests match against the ingress rule for the service.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com  # Replace with your actual domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 80
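If you don’t yet have DNS pointing at the controller, you can still test a host-matched ingress rule by supplying the Host header yourself, using the EXTERNAL-IP retrieved earlier:

```shell
# fetch the controller's external IP (namespace from the install above)
IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# pretend to be mydomain.com so the host rule matches
curl -H "Host: mydomain.com" "http://$IP/"
```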

Adding Prometheus/Grafana to your Kubernetes Cluster

Let’s assume you’ve got a running Kubernetes cluster set up, with some services running and you’d like to monitor the state of the cluster/namespace or specific pods. Using Prometheus along with Grafana is a great solution.

To deploy, just follow these steps by executing the following commands (you’ll need helm installed).

  • helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
  • helm repo update
  • helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace

With the last command you should see something like the following

NAME: prometheus
LAST DEPLOYED: Sat Jul 26 12:03:34 2025
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:

Now to check everything is working and to use Grafana Dashboards, simply use port forwarding, i.e.

kubectl port-forward svc/prometheus-grafana 80:80 -n monitoring

Now you can access http://localhost:80 (or whatever port has been set up).

Default login credentials are username/password admin/prom-operator.

Azure Managed Prometheus

You can also use Azure’s managed Prometheus instance.

I’ve not tried setting this up via Azure, so the steps below are taken from another source (sorry I cannot recall where), feel free to try them

  • Enable Azure Monitor Managed Prometheus
    • Go to your AKS cluster in the Azure Portal.
    • Under Monitoring, enable Managed Prometheus.
  • Create Azure Managed Grafana
    • Use the Azure Portal or CLI to create a Grafana instance.
    • Link it to your Azure Monitor workspace.
  • Configure Grafana Data Source
    • In Grafana, add a Prometheus data source.
    • Use the Azure Monitor workspace query endpoint.
  • Assign Permissions
    • Ensure your Grafana instance or app registration has the Monitoring Data Reader role on the workspace.

Windows Terminal History

I really like the Windows Terminal application; its autocomplete is really useful for reminding me of commands I’ve used. Occasionally you have commands that are similar, and one way to check what’s in the list that makes up the autocompletion options is to look at the ConsoleHost_history.txt file

$env:APPDATA\Microsoft\Windows\PowerShell\PSReadLine\ConsoleHost_history.txt

dotnet says the runtime and sdk exists on Linux but Rider and Visual Code thinks they don’t

As part of my Macbook Air using Linux Mint, I’ve been getting my development environments set up.

I got .NET 8 and .NET 9 installed so that running both of the following listed them

dotnet --list-sdks
dotnet --list-runtimes

But Visual Studio Code and Rider complain that .NET 8 and .NET 9 need to be installed.

It was this Stackoverflow post that had the answer.

I deleted the /etc/dotnet folder, as the previous link suggests, and things started working.
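For reference, the fix amounted to the following (run with care, then re-check that the SDKs are listed):

```shell
# remove the stray install-location config, per the Stack Overflow answer
sudo rm -rf /etc/dotnet

# confirm the IDEs' view should now match
dotnet --list-sdks
```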

Installing Linux on my old Macbook Air

I’ve an old Intel-based Macbook Air which, unfortunately, Apple no longer supports with updates; it’s a great little machine which I want to keep running. So I did what I always do when the OS outgrows the hardware: I installed Linux on it.

I would usually install Ubuntu but thought I’d try something lighter weight, by installing Linux Mint.

  • The first thing we need to do is download the ISO from the Linux Mint site (I went with the Cinnamon Edition).
  • Next we need to write the ISO to a USB drive to create a bootable USB. I used Ubuntu and already had the tools for this, but Etcher seems to be the go-to application for this nowadays, so you could download/install it and create your bootable USB from the ISO.

Once you have the bootable USB, it’s over to your Mac.

  • Boot your Mac up whilst holding the Option key to go to the startup manager
  • Select the EFI drive (coloured orange)
  • Select “Start Cinnamon” option

The above will take you into LIVE mode, i.e. you’ve not installed Linux but are running from the USB. Let’s check everything works before you commit to installing Linux Mint. From the Linux Mint Cinnamon desktop…

  • Open Driver Manager and look for your network adapter
  • Apply/install any required drivers
  • From the network icon on the desktop, select your WiFi network

Play around with the OS and check it’s what you want to install and that everything works; then, if you’re going to commit and overwrite macOS

  • Double click the Install Linux Mint icon

I had an issue when I installed Mint, in that it no longer installed the drivers for the WiFi when I went to Driver Manager to select them. Instead it wanted to go online to get them, which is difficult as the WiFi drivers are what I was trying to install and I had no ethernet connection; and to be honest, I knew the drivers were on the USB as it had worked in LIVE mode.

To get things working, I found that if I mounted the USB drive, searched for the Broadcom packages (these are what was installed during testing in LIVE mode) and then double-clicked on the two files/packages within the USB installation directories, I could install the drivers and everything worked.

Disclaimer: I’ve only just installed Linux Mint on the Macbook Air, so I haven’t done any extensive testing.