Monthly Archives: October 2025

Rust and SQLite

Add the dependencies

[dependencies]
rusqlite = { version = "0.37.0", features = ["bundled"] }

The bundled feature will automatically compile and link an up-to-date SQLite. Without it I got errors such as “LINK : fatal error LNK1181: cannot open input file ‘sqlite3.lib'”. Obviously if you already have everything installed for SQLite, then you might prefer the non-bundled dependency, so just replace the above with

[dependencies]
rusqlite = "0.37.0"

Create a DB

Now let’s create a database as a file and insert an initial row of data

use rusqlite::Connection;

fn main() {
    let connection = Connection::open("./data.db3").unwrap();
    connection.execute("CREATE TABLE app (id INTEGER PRIMARY KEY, name TEXT NOT NULL)", ()).unwrap();
    connection.execute("INSERT INTO app (id, name) VALUES (?, ?)", (1, "Hello")).unwrap();
}

We could also do this in memory using the following

let connection = Connection::open_in_memory().unwrap();

Reading from our DB

We’ll create a simple structure representing the DB created above

#[derive(Debug)]
struct App {
    id: i32,
    name: String,
}

Now to read into this we use the following

let mut statement = connection.prepare("SELECT * FROM app").unwrap();
let app_iter = statement.query_map([], |row| {
  Ok(App {
    id: row.get(0)?,
    name: row.get(1)?,
  })
}).unwrap();

for app in app_iter {
  println!("{:?}", app.unwrap());
}

Note: no extra use clause is needed for the loop above, as query_map returns a regular Iterator. If you instead work with the raw Rows returned by query (for example via its map method), you’ll need the fallible iterator trait re-exported by rusqlite

use rusqlite::fallible_iterator::FallibleIterator;

Init containers in Kubernetes

Init containers can be used to perform initialization logic before the main containers run. Typical uses include

  • Waiting for a service to become available
  • Running database migrations
  • Copying files to a shared location
  • Setting up configuration

Init containers run sequentially; each must complete successfully before the next starts, and all must finish before the main containers run.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: wait-for-db
        image: busybox
        command: ['sh', '-c', 'until nc -z db-service 5432; do echo waiting; sleep 2; done']
      containers:
      - name: app
        image: my-app-image
        ports:
        - containerPort: 8080

This waits for a PostgreSQL service to become available before our application can start.

Using Garnet

Garnet is a Redis (RESP) compatible cache from Microsoft Research. It’s used internally within Microsoft, but as it’s a research project it’s possible the design etc. will change/evolve.

Not only is it Redis compatible, it’s written in C#, making it ideal for .NET environments. Check out the Garnet website for more information.

I’ve shown code to interact from C#/.NET to Redis in the past, the same code will work with Garnet.

Here’s a docker-compose file to create an instance of Garnet

services:
  garnet:
    image: 'ghcr.io/microsoft/garnet'
    ulimits:
      memlock: -1
    container_name: garnet
    ports:
      - "6379:6379"
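Because Garnet speaks RESP, any Redis client library should work against it unchanged. To illustrate what actually goes over the wire to port 6379, here’s a small hypothetical Rust helper (resp_encode is my own name, not part of any library) that encodes a command in the RESP format:

```rust
// Encode a command (e.g. SET key value) into the RESP wire format
// accepted by Redis-compatible servers such as Garnet.
// A command is an array of bulk strings:
//   *<number of args>\r\n then, per arg, $<byte length>\r\n<arg>\r\n
pub fn resp_encode(args: &[&str]) -> String {
    let mut out = format!("*{}\r\n", args.len());
    for arg in args {
        out.push_str(&format!("${}\r\n{}\r\n", arg.len(), arg));
    }
    out
}

fn main() {
    // Shown with Debug formatting so the \r\n framing is visible.
    println!("{:?}", resp_encode(&["PING"]));
    println!("{:?}", resp_encode(&["SET", "greeting", "hello"]));
}
```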

Creating my auth token for use in Postman

I’ve a simple set of calls to my application’s endpoints and occasionally use Postman to test them, or simply to call them and see what the results look like. However, my calls all require authentication tokens.

The aim here is that when I require an authentication token I’ll call a local app which gets one for me. I want Postman to call my code to retrieve an accessToken which can then be used by Postman for subsequent calls.

Let’s set up Postman to use a variable named accessToken

  • Create an environment (under Environments) or use Globals
  • Add a variable named accessToken (you can name yours whatever you want). Do not supply initial or current values and leave the type as default
  • Go to your request and in the Authorization tab, select the auth method. I chose Bearer Token as that’s what my endpoint uses.
  • In the Token field, type {{accessToken}}

At this point we have a link between the variable accessToken and the value sent as the Bearer Token, but we still need to generate the token and set its value into the variable accessToken.

  • Select the Scripts tab and Pre-request
  • Add the following
    try {
      const response = await pm.sendRequest({
        url: "https://localhost:5000/gettoken",
        method: "GET"
      });
    
      pm.environment.set("accessToken", response.text());
    } catch (err) {
      console.error(err);
    }
    

Obviously the URL in the above script should be whatever your server is; in my case I return raw text (you could of course deserialize from JSON instead).

That’s it – Send the request, Postman calls your service to get the token and assigns it to accessToken, and your Postman request should be authenticated.

Webhooks in Kubernetes

Webhooks are HTTP callbacks triggered by the Kubernetes API server during resource operations.

There are two main types

  • Mutating Webhook: Modify or inject fields into a resource
  • Validating Webhook: Accept or reject a resource based upon logic

A validating webhook configuration

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validate-pods.k8s.io
webhooks:
  - name: podcheck.k8s.io
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: pod-validator
        namespace: default
        path: "/validate"
      caBundle: <base64-ca>
    admissionReviewVersions: ["v1"]
    sideEffects: None

Essentially k8s webhooks give us the opportunity to intercept k8s API requests such as CREATE, UPDATE or DELETE. Using webhooks we can accept or reject requests without modifying the k8s object.

In the example YAML above, we’re going to intercept CREATE calls for pods. This is a validating webhook (validate-pods.k8s.io), which is non-mutating: it can reject requests but not modify them. The name of the webhook is podcheck.k8s.io, followed by the rules, which we’ve already touched on. Then we have the clientConfig, which uses our pod-validator service in the default namespace with the path /validate – meaning the service is accessible via https://pod-validator.default.svc/validate. A sideEffects value of None means this webhook doesn’t write to external systems, hence is safe for retries.

The webhook server must expose an HTTPS endpoint which accepts AdmissionReview requests and returns a response denoting whether the operation can proceed.

The AdmissionReview request will look similar to this for a pod CREATE

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "1234abcd-5678-efgh-ijkl-9012mnopqrst",
    "kind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "resource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "requestKind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "requestResource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "name": null,
    "namespace": "default",
    "operation": "CREATE",
    "userInfo": {
      "username": "system:serviceaccount:default:deployer",
      "uid": "abc123",
      "groups": [
        "system:serviceaccounts",
        "system:authenticated"
      ]
    },
    "object": {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "name": "example-pod",
        "namespace": "default",
        "labels": {
          "app": "demo"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "nginx",
            "image": "nginx:1.21",
            "resources": {
              "limits": {
                "cpu": "500m",
                "memory": "128Mi"
              }
            }
          }
        ]
      }
    },
    "oldObject": null,
    "dryRun": false
  }
}

A response will look something like this

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "1234abcd-5678-efgh-ijkl-9012mnopqrst",
    "allowed": true,
    "status": {
      "code": 200,
      "message": "Pod validated successfully"
    }
  }
}

The allowed field can simply be set to false, with a minimal response like the one below

"allowed": false,
"status": {
  "code": 400,
  "message": "Missing required label: team"
}
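On the server side, building that response is just assembling JSON. As a dependency-free illustration, here’s a hypothetical Rust helper (admission_response is my own name; a real webhook would use a JSON library such as serde_json) that produces an allow-or-deny response for a given request uid:

```rust
// Build a minimal AdmissionReview response body for an admission webhook.
// The uid must echo the uid from the incoming AdmissionReview request.
pub fn admission_response(uid: &str, allowed: bool, code: u16, message: &str) -> String {
    format!(
        r#"{{"apiVersion":"admission.k8s.io/v1","kind":"AdmissionReview","response":{{"uid":"{}","allowed":{},"status":{{"code":{},"message":"{}"}}}}}}"#,
        uid, allowed, code, message
    )
}

fn main() {
    // A denial, e.g. when a required label is missing.
    println!(
        "{}",
        admission_response("1234abcd", false, 400, "Missing required label: team")
    );
}
```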

Kubernetes secret resource

Kubernetes includes a secret resource store.

We can get a list of secrets within a namespace

kubectl get secrets -n dev

and for all namespaces using

kubectl get secrets --all-namespaces

We can create a secret of the specified type

  • docker-registry Create a secret for use with a container registry
  • generic Create a secret from a local file, directory, or literal value, known as an Opaque secret type
  • tls Create a TLS secret, such as a TLS certificate and its associated key

Hence we use the “specified type” as below (which uses a generic type)

kubectl create secret generic my-secret \
  --from-literal=username=admin \
  --from-literal=password=secret123 \
  -n dev

With the above command, we created a secret with the name my-secret and the key username with value admin followed by another key/value.

A secret can also be created using a Kubernetes YAML file with kind “Secret”

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: YWRtaW4=       # base64 encoded 'admin'
  password: c2VjcmV0MTIz   # base64 encoded 'secret123'

To read a secret back, we can use jsonpath and decode the base64 value

kubectl get secret my-secret -o jsonpath="{.data.username}" -n dev | base64 --decode

Or using Powershell

$encoded = kubectl get secret my-secret -o jsonpath="{.data.username}" -n dev
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($encoded))
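The same decode can be done in Rust. The standard library has no base64 support, so purely to show the mechanics, here’s a minimal decoder (in real code you’d use the base64 crate; this sketch ignores URL-safe alphabets and malformed input):

```rust
// Minimal base64 decoder, enough to decode kubectl secret values.
// Real code should use the base64 crate; this just shows the mechanics.
const TABLE: &str = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

pub fn b64_decode(input: &str) -> Option<Vec<u8>> {
    let mut bits: u32 = 0; // bit accumulator
    let mut nbits = 0u32;  // number of pending (undecoded) bits
    let mut out = Vec::new();
    for c in input.trim().chars() {
        if c == '=' {
            break; // padding: nothing more to decode
        }
        let value = TABLE.find(c)? as u32; // each character carries 6 bits
        bits = (bits << 6) | value;
        nbits += 6;
        if nbits >= 8 {
            nbits -= 8;
            out.push((bits >> nbits) as u8); // emit the top complete byte
        }
    }
    Some(out)
}

fn main() {
    let decoded = b64_decode("YWRtaW4=").unwrap();
    println!("{}", String::from_utf8(decoded).unwrap()); // prints admin
}
```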

Here’s an example of using a secret by exposing it via environment variables

env:
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: username

Inside the container this gives us an environment variable DB_USER (e.g. process.env.DB_USER in Node).
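Inside the pod the secret is then just an ordinary environment variable, readable the same way in any language; for example, in Rust:

```rust
use std::env;

fn main() {
    // DB_USER is injected by the secretKeyRef above; fall back for local runs.
    let db_user = env::var("DB_USER").unwrap_or_else(|_| "unknown".to_string());
    println!("Connecting as {}", db_user);
}
```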

Another use is mounting a secret as a volume, i.e. into the pod’s file system

volumes:
  - name: secret-volume
    secret:
      secretName: my-secret

volumeMounts:
  - name: secret-volume
    mountPath: "/etc/secret"
    readOnly: true

Let’s create a chatbot/agent using Azure AI Foundry and Semantic Kernel with C#

Setting up a project and model in AI Foundry

Let’s start by creating a project in https://ai.azure.com/

Note: I’m going to create a very simple, pretty standard chatbot for a pizza delivery service, so my project is going to be called pizza. You’ll see this in the code, but of course replace it with your preferred example or real code; the setup is the same for your own chatbot anyway.

  • Navigate to https://ai.azure.com/
  • Click Create new (if not available go to the Management Center | All Resources and the option should be there)
  • Select the Azure AI Foundry resource, then click Next
  • Supply a project name, resource group (or create one) and region – I left this as Sweden Central as I’m sure I read that it was a little more advanced than some regions, but do pick one which suits.
  • Click Create

Once completed you’ll be presented with the project page.

We’re not quite done as we need to deploy a model…

  • From the left hand nav. bar, locate My assets and click on Models + endpoints.
  • Click + Deploy model
  • Select Deploy base model from the drop down
  • From the Select a model popup, choose a model. I’ve selected gpt-4o-mini, which is a good model for chat completion.
  • Click Confirm
  • Give it a Deployment name. I’m using the Deployment type Standard and leaving all the new fields that appear as default
  • Click Deploy to assign the model to the project

We should now see some source code samples listed. We’ll partially be using these in the code part of this post, but before we move on we need an endpoint and an API key.

  • From this page, on the Details tab, copy the Endpoint Target URI – we don’t need the whole thing; from the project overview we can get the cut-down version, which is basically https://{your project}.cognitiveservices.azure.com/
  • From below the Target URI copy the Key

Writing the code

Create a Console application using Visual Studio.

Let’s begin by adding the following NuGet packages

dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.Extensions.Configuration
dotnet add package Microsoft.Extensions.Configuration.Json

We’re using (as you can see) Semantic Kernel. The versions seem to change pretty quickly at the moment, so hopefully the code below will work; if not, check against the version you’re using. For completeness, here are my versions

<ItemGroup>
  <PackageReference Include="Microsoft.Extensions.Configuration" Version="9.0.10" />
  <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="9.0.10" />
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.66.0" />
</ItemGroup>

Create yourself an appsettings.json file, which should look like this

{
  "AI": {
    "Endpoint": "<The Endpoint>",
    "ApiKey": "<Api Key>",
    "ApiVersion": "2024-12-01-preview",
    "DeploymentName":  "pizza"
  }
}

Obviously you’ll need to supply your endpoint and API key that we copied after creating our AI Foundry project.

Now before we go on to look at implementing the Program.cs… I want this LLM to use some custom functions to fulfil a couple of tasks, such as returning the menu and placing an order.

The AI Foundry model is an LLM which acts as our chatbot: it can answer questions but can also generate hallucinations etc. For example, without my functions it will try to invent a pizza menu for me, which is not a lot of use to our pizza place.

What I want is the Natural Language Processing (NLP) as well as the model’s “knowledge” to work with my functions – we implement this using Plugins.

What I want to happen is this

  • The customer connects to the chatbot/LLM
  • The customer then asks to either order a pizza or for information on what pizzas we make, i.e. the menu
  • The LLM then needs to pass information to the PizzaPlugin which then returns information to the LLM to respond to the customer

Our PizzaPlugin is a standard C# class and we’re going to keep things simple, but you can imagine that this could call into a database or whatever you like to get a menu and place an order.

public class PizzaPlugin
{
    [KernelFunction]
    [Description("Use this function to list the pizzas a customer can order")]
    public string ListMenu() => "We offer Meaty Feasty, Pepperoni, Veggie, and Cheese pizzas.";

    [KernelFunction]
    public string PlaceOrder(string pizzaType)
        => $"Order placed for: {pizzaType}. It will be delivered in 30 minutes.";
}

The KernelFunctionAttribute marks methods that are registered with the Semantic Kernel as callable plugin functions. The DescriptionAttribute is optional, but recommended if you want the LLM to understand what the function does during automatic function calling (which we will be using). I’ve left the other function without a DescriptionAttribute just to demonstrate it’s not required in this case, yet our function will/should still be called. If we had many similar functions, a description would be a helpful addition.

Note: Also try to use function names that clearly state their usage, i.e. action-oriented naming.

Now let’s implement the Program.cs where we’ll read in our configuration from appsettings.json, create the Semantic Kernel, add the Azure OpenAI chat services, add the plugin we just created, then call into the AI Foundry LLM model we created earlier.

We’re NOT going to create all the code for an actual console based chat app, hence we’ll just predefine the “chat” part with a ChatHistory object. In a real-world app you may wish to keep track of the chat history.

using Microsoft.Extensions.Configuration;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using OpenAI.Chat;
using SemanticKernelTest.PizzaPlugin;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

var endpoint = config["AI:Endpoint"];
var apiKey = config["AI:ApiKey"];
var apiVersion = config["AI:ApiVersion"];
var deploymentName = config["AI:DeploymentName"];

var builder = Kernel.CreateBuilder();

builder.AddAzureOpenAIChatCompletion(
    deploymentName: deploymentName,
    endpoint: endpoint,
    apiKey: apiKey,
    apiVersion: apiVersion
);

var kernel = builder.Build();

var plugin = new PizzaPlugin();
kernel.Plugins.AddFromObject(plugin);

var chatCompletion = kernel.GetRequiredService<IChatCompletionService>();

var chatHistory = new ChatHistory();
chatHistory.AddAssistantMessage("How can I help you?");
chatHistory.AddUserMessage("Can I order a plain Pepperoni pizza?");

var result = await chatCompletion.GetChatMessageContentAsync(chatHistory, new PromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
}, kernel);

Console.WriteLine(result.Content);

Before you run this code, place breakpoints on the kernel functions in the plugin and then run it. Hopefully all runs OK and you’ll notice that the LLM (via Semantic Kernel) calls into the plugin methods. As you’ll hopefully see, it calls the menu to check whether the pizza supplied is one we make, then orders it if it exists. Change the pizza to one we do not make (for example Chicken) and watch the process and output.

More settings

In the code above we’re using PromptExecutionSettings, but we can instead use OpenAIPromptExecutionSettings, which lets us configure OpenAI options such as Temperature, MaxTokens and others, for example

var result = await chatCompletion.GetChatMessageContentAsync(chatHistory, new OpenAIPromptExecutionSettings
{
  Temperature = 0.7,
  FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
  MaxTokens = 100
}, kernel);

These options are also settable in AI Foundry. Temperature controls the randomness of the model: a lower value is more deterministic, whereas a higher value makes the results more random. The default is 1.0.

  • 0.2-0.5 is more deterministic and produces more focused outputs
  • 0.8-1.0 allows for more diverse and creative responses

Creating kubectl plugins

To create a kubectl plugin whereby, for example, we could run a new tool like this

kubectl log index echo 3 -n dev

Where the above would find pods with a partial name of echo and, from those pods that match, select the one at index 3 (0-based).

To create a plugin you use the naming convention

kubectl-<your-plugin-name>

You need to build your plugin then ensure it’s copied into your PATH.

Once built and copied, you can use the following to check if kubectl can find the plugin

kubectl plugin list

Sample Plugin

I’ve created the plugin using Rust.

Note: This is just a quick implementation and not fully tested, but gives an idea of how to create such a plugin.

Set your Cargo.toml dependencies as follows

k8s-openapi = { version = "0.26.0", features = ["v1_32"] }
kube = { version = "2.0.1", features = ["runtime", "derive"] }
tokio = { version = "1", features = ["full"] }
clap = { version = "4", features = ["derive"] }
anyhow = "1.0"

Next we want to create the command line arguments using the following

#[derive(Parser, Debug)]
#[command(name = "kubectl-log-index")]
#[command(author, version, about)]
pub struct Args {
    /// Partial name of the pod to match
    #[arg()]
    pub pod_part: String,
    /// Index of the pod (0-based)
    pub index: usize,
    /// Follow the log stream
    #[arg(short = 'f', long)]
    pub follow: bool,
    /// Kubernetes namespace (optional)
    #[arg(short, long)]
    pub namespace: Option<String>,
}

We’re supplying some short-form parameters such as -f, which can be used instead of --follow; likewise -n in place of --namespace.

Our main.rs looks like this

mod args;

use clap::Parser;
use anyhow::Result;
use kube::{Api, Client};
use k8s_openapi::api::core::v1::Pod;
use std::process::Command;
use kube::api::ListParams;
use kube::runtime::reflector::Lookup;
use crate::args::Args;

/// kubectl plugin to get logs by container index
#[tokio::main]
async fn main() -> Result<()> {
    let args = Args::parse();

    let namespace: &str = args.namespace
        .as_deref()
        .unwrap_or("default");

    let client = Client::try_default().await?;
    let pods: Api<Pod> = Api::namespaced(client, namespace);

    let pod_list = find_matching_pods(pods, &args.pod_part).await.expect("Failed to find matching pods");
    
    let pod = pod_list
        .get(args.index)
        .cloned()
        .ok_or_else(|| anyhow::anyhow!("Pod not found"))?;

    let pod_name = pod.name().ok_or_else(|| anyhow::anyhow!("Pod name not found"))?.to_string();

    let mut cmd = Command::new("kubectl");

    cmd.args(["logs", pod_name.as_str()]);

    if namespace != "default" {
        cmd.args(["-n", namespace]);
    }

    if args.follow {
        cmd.arg("-f");
    }

    cmd
        .status()?;

    Ok(())
}

pub async fn find_matching_pods(
    pods: Api<Pod>,
    partial: &str,
) -> Result<Vec<Pod>, Box<dyn std::error::Error>> {
    let pod_list = pods.list(&ListParams::default()).await?;

    let matches: Vec<Pod> = pod_list.items
        .into_iter()
        .filter(|pod| {
            pod.metadata.name
                .as_ref()
                .map(|name| name.contains(partial))
                .unwrap_or(false)
        })
        .collect();

    Ok(matches)
}
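The core filter-and-index behaviour is easy to exercise in isolation from the Kubernetes client. Here’s a std-only sketch, where pick is a hypothetical helper mirroring find_matching_pods plus the index lookup, operating on plain pod names:

```rust
// Given all pod names in the namespace, keep those containing the
// partial name and return the one at the requested (0-based) index.
pub fn pick<'a>(pod_names: &'a [String], partial: &str, index: usize) -> Option<&'a str> {
    pod_names
        .iter()
        .filter(|name| name.contains(partial))
        .nth(index)
        .map(|s| s.as_str())
}

fn main() {
    let pods: Vec<String> = ["echo-0", "web-0", "echo-1", "echo-2", "echo-3"]
        .iter()
        .map(|s| s.to_string())
        .collect();

    // `kubectl log index echo 3` resolves to the fourth matching pod.
    println!("{:?}", pick(&pods, "echo", 3)); // Some("echo-3")
}
```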

How to resolve “Pipeline does not have permissions to use the referenced service connection(s)”

I’ve been caught out by this before and then have to remind myself how to fix it.

You have an Azure DevOps pipeline (mine’s a YAML based pipeline). You run the pipeline and get the error message “Pipeline does not have permissions to use the referenced service connection(s) XXX” (where XXX is your SPN etc.).

This is simple to fix by correctly configuring your security.

  • Go to Project Settings | Service Connections within Azure DevOps
  • Filter or otherwise find the SPN that you need to add the permission to and click it
  • In the top-right corner, click the vertical “…” (kebab menu) and select the Security option
  • Scroll down to the pipeline permissions and add your pipeline to the list of permissioned pipelines using the + button