Monthly Archives: August 2025

Creating audit history for your table(s) in SQL Server

We want to capture audit history for the table(s) in our database. A good way to do this is via triggers.

Let’s create ourselves an Audit table first, something like this

CREATE TABLE [dbo].[Audit](
  [Id] [int] IDENTITY(1,1) NOT NULL,
  [Operation] [nvarchar](3) NOT NULL,
  [DateTime] [datetime] NOT NULL,
  [Before] [nvarchar](256) NULL,
  [After] [nvarchar](256) NULL,
) ON [PRIMARY]

The Operation column will store one of the values “UPD”, “DEL” or “INS”; in other words the audit record represents an Update, Delete or Insert. DateTime is the date and time the operation took place, Before is the data before it changed and After is the data it changed to.

I’ve not included it here, but most likely we’ll also want a Who field recording who made the change.
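As a sketch of that, a Who column might be added with a default of the current SQL Server login (SUSER_SNAME() is one option; your application may of course supply its own user value instead)

ALTER TABLE [dbo].[Audit]
ADD [Who] [nvarchar](128) NOT NULL
CONSTRAINT [DF_Audit_Who] DEFAULT (SUSER_SNAME())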

In my little example I have a table named Scale which is used to store musical scales. Just for completeness let’s take a look at the Scale table

CREATE TABLE [dbo].[Scale](
  [Id] [int] IDENTITY(1,1) NOT NULL,
  [Name] [nvarchar](256) NOT NULL,
  [Intervals] [nvarchar](256) NOT NULL,
  [Description] [nvarchar](256) NULL
) ON [PRIMARY]

Now I’m just going to add triggers to this one table, however you might have such triggers on multiple tables. We will want to have a trigger for deletion, insertion and updates.

For the delete trigger we might use something like this

CREATE TRIGGER [dbo].[ScaleDelete]
ON [dbo].[Scale]
FOR DELETE
AS
IF EXISTS (SELECT 1 FROM deleted)
  INSERT INTO dbo.Audit(Operation, DateTime, Before)
  SELECT 'DEL', GETDATE(),
    (SELECT d.Name, d.Description, d.Intervals FROM deleted d FOR JSON PATH);

For a deletion we insert the operation, the date/time and the deleted data (as JSON, in the Before column) into the audit table. The IF EXISTS guard stops us writing an audit row when a DELETE statement affected no rows.

For insertions we might have something like this

CREATE TRIGGER [dbo].[ScaleInsert]
ON [dbo].[Scale]
FOR INSERT
AS
IF EXISTS (SELECT 1 FROM inserted)
  INSERT INTO dbo.Audit(Operation, DateTime, After)
  SELECT 'INS', GETDATE(),
    (SELECT i.Name, i.Description, i.Intervals FROM inserted i FOR JSON PATH);

In this case we take the inserted data and pass parts of it to the audit table (similar to the deletion).

For the update trigger we might write something similar to the above, but then any time the UPDATE SQL command is called the audit log will be added to, even if there were no changes. It would, therefore, be nice to only update the audit table when real changes are detected.

In the case of an update we get both inserted and deleted data, so we can check whether the inserted data differs from the deleted data using the EXCEPT SQL command. Then, only if a difference has been found, will we update the audit table. For fun, we’re also going to turn the deleted and inserted data into JSON and store the whole row in the Before and After fields of the audit table.

CREATE TRIGGER [dbo].[ScaleUpdate]
ON [dbo].[Scale]
AFTER UPDATE
AS
IF EXISTS (
  SELECT Name, Description, Intervals FROM inserted
  EXCEPT
  SELECT Name, Description, Intervals FROM deleted
)
BEGIN
  INSERT INTO dbo.Audit(Operation, DateTime, Before, After)
  SELECT 'UPD', GETDATE(),
    (SELECT d2.Name, d2.Description, d2.Intervals FROM deleted d2 WHERE d2.Id = d.Id FOR JSON PATH),
    (SELECT i2.Name, i2.Description, i2.Intervals FROM inserted i2 WHERE i2.Id = i.Id FOR JSON PATH)
  FROM inserted i
  JOIN deleted d ON i.Id = d.Id
END

If you’d prefer to create XML output you can replace FOR JSON PATH with FOR XML RAW.

Now to test these we can use

DELETE from Scale where Name = 'Chromatic'

INSERT INTO Scale (Name, Intervals)
VALUES ('Chromatic', '1,1,1,1,1,1,1,1,1,1,1,1')

UPDATE Scale
SET Intervals = '1,1,1,1,1,1,1,1,1,1,1,2'
WHERE Name = 'Chromatic'
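Assuming the tables and triggers above are in place, we can then inspect the audit table to confirm each operation was captured

SELECT Id, Operation, DateTime, Before, After
FROM dbo.Audit
ORDER BY Id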

Output caching in ASP.NET core

Output caching can be used on your endpoints so that, if the same request arrives within the cache expiry time, the endpoint will not get called and the stored/cached response from a previous call will be returned.

To make that clearer: the endpoint’s method will NOT be called; the response was cached and hence is returned via the ASP.NET middleware.

This is particularly useful for static or semi-static content.

Out of the box, output caching can vary the cache by query parameters, and routes can easily be set up to handle caching.

I’m going to set up the output cache to use Redis

builder.AddRedisOutputCache("cache");

builder.Services.AddOutputCache(options =>
{
    // the lambda parameter is named policy to avoid clashing with the outer builder variable
    options.AddPolicy("ShortCache", policy => policy.Expire(TimeSpan.FromSeconds(10)));
});
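Note that the “ShortCache” policy is only defined at this point; as a sketch (the endpoint path here is purely illustrative), it can be applied to an endpoint by name

app.MapGet("/short-cached", () => DateTime.UtcNow)
    .CacheOutput("ShortCache");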

Now from a minimal API endpoint we can apply output caching using CacheOutput as below

app.MapGet("/cached/{id}", (int id) => new
    {
        Message = $"Output Cache {id}",
        Timestamp = DateTime.UtcNow
    })
    .CacheOutput(c => c.VaryByValue(
        ctx => new KeyValuePair<string, string>("id", ctx.Request.RouteValues["id"]?.ToString() ?? string.Empty)));

Each unique id gets its own cached response and the endpoint is only executed once per id within the cache duration.

– The VaryByValue method lets you define custom cache keys based on values computed from the request (here the id route value).
– By default the cache key already includes the request path, so /cached/1 and /cached/2 get separate entries anyway; VaryByValue matters when the value you want to vary by isn’t part of the default key.

For query string based endpoints we can rely on the default behaviour, which varies the cache by the query string, for example

app.MapGet("/cached-query", (int id) => new
    {
        Message = $"Output Cache {id}",
        Timestamp = DateTime.UtcNow
    })
    .CacheOutput();

Finally, remember to add the output caching middleware

app.UseOutputCache();

IIS in Docker

With the ability to use Windows-based Docker images we can now deploy an IIS Docker image to host our sites (and it’s pretty simple).

Would you want to use Windows and IIS? Possibly not, but this post is a primer for deploying ASP.NET Framework apps to Docker, so it just shows what’s possible.

Let’s jump straight in and look at the Dockerfile

FROM mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019 

COPY wwwroot /inetpub/wwwroot/

This assumes that your HTML etc. resides in a folder named wwwroot off of the folder containing the Dockerfile. You can replace COPY with ADD if you’d prefer the capability of extracting compressed files as part of the copy.

Now build this using

docker build -t ns/mysite:0.1.0 .

Change ns/mysite:0.1.0 to your preferred tag name. Next run using

docker run --name mysite -p 80:80 -d ns/mysite:0.1.0

Now from your browser go to http://localhost and access the HTML pages etc.
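When you’re finished, the container can be stopped and removed in the usual way

docker stop mysite
docker rm mysite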

Investigating pod resources and usage

Top

kubectl top pod
// Or we could use labels, for example app=ui, app=proxy etc.
kubectl top pod -l 'app in (ui, proxy, api)' -n my-namespace

Check the pods configuration

kubectl describe pod <pod-name> | grep -A5 "Limits"

Prints the five lines after the “Limits” section, for example

Limits:
  cpu:     500m
  memory:  1Gi
Requests:
  cpu:      50m
  memory:   256Mi

Resource Quotas

kubectl get resourcequotas
kubectl get resourcequotas -n my-namespace
kubectl describe resourcequota {name from above call} -n my-namespace

CPU Throttling

// cgroup v2
kubectl exec <pod-name> -- cat /sys/fs/cgroup/cpu.stat
// cgroup v1
kubectl exec <pod-name> -- cat /sys/fs/cgroup/cpu/cpu.stat

For example

usage_usec 177631637
user_usec 89639616
system_usec 87992020
nr_periods 191754
nr_throttled 271
throttled_usec 11291159

– nr_periods – The number of scheduling periods that have occurred.
– nr_throttled – The number of times the process was throttled due to exceeding CPU limits.
– throttled_usec – The total time spent throttled, in microseconds.

In the example above the process was throttled in 271 of 191754 periods, i.e. only around 0.14% of the time.

Kubernetes port forwarding

We might deploy something to a pod which doesn’t have an external interface, or we just want to debug our deployed pod without going through load balancers etc. Kubernetes allows us to connect to a pod and redirect a local port to its port. So, for example, I might have a pod named “my-pod” listening on port 5000 within Kubernetes, and I want to access it via curl or a browser or whatever.

Hence we use the following command

kubectl port-forward pod/my-pod 8080:5000

and now to access the application running in this pod using something like this

curl localhost:8080
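We’re not limited to targeting a pod directly; port-forward also accepts services and deployments, for example

kubectl port-forward service/my-service 8080:5000
kubectl port-forward deployment/my-deployment 8080:5000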

Rust, postfix “?”

Let’s assume we have a function such as the one below, where the highlighted line ends in a “?” – what’s this doing?

fn get_history() -> Result<Vec<Revision>, String> {
   let revisions: Vec<Revision> = get_revisions()?;
   return Ok(revisions)
}

We can see that the return is a Result – which is an enum that essentially looks like this

enum Result<T, E> {
    Ok(T),
    Err(E),
}

Hence our get_history function returns a Result which is either Ok wrapping a Vec<Revision> (for success, of course) or an Err wrapping a String (for an error).

Okay, so what’s the highlighted code doing, especially as we only appear to return an Ok?

This is essentially the same as the following

let revisions = match get_revisions() {
  Ok(val) => val,
  Err(e) => return Err(e)
};

As we can see, this is a nice bit of syntactic sugar which either returns an error from the function OR assigns the Ok result to the revisions variable.
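To see “?” in action end to end, here’s a minimal, self-contained sketch (the function and item names are just for illustration) where a failing parse is propagated out of the calling function:

```rust
use std::num::ParseIntError;

// Parse every string; the first failure is propagated to the caller via `?`.
fn parse_all(items: &[&str]) -> Result<Vec<i32>, ParseIntError> {
    let mut out = Vec::new();
    for s in items {
        // On Err, `?` returns early from parse_all; on Ok it yields the value.
        out.push(s.parse::<i32>()?);
    }
    Ok(out)
}

fn main() {
    assert_eq!(parse_all(&["1", "2", "3"]), Ok(vec![1, 2, 3]));
    assert!(parse_all(&["1", "oops"]).is_err());
}
```

Note that the early return means parse_all never produces a partial Vec; the caller sees either all parsed values or the first error.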

Pod disruption budgets in Kubernetes

The PodDisruptionBudget kind (or PDB) is used to limit the impact of voluntary disruptions on your pods.

To give a little more detail, this is a policy that, for example, limits how many pods can be disrupted at once, ensuring a minimum number of pods remain available during operations such as node upgrades, autoscaling or voluntary evictions. This is a way to ensure serving capacity remains at a given level during upgrades etc.

Here’s an example yaml file for this

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: echo-pdb
  namespace: dev
spec:
  minAvailable: 1  # At least one pod must be available OR use maxUnavailable: 1 for maximum which can be unavailable
  selector:
    matchLabels:
      app: echo

In this example we use “minAvailable”; you could use “maxUnavailable” instead, but not both.
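Once applied, the PDB and how much disruption is currently allowed can be checked with

kubectl get pdb -n dev
kubectl describe pdb echo-pdb -n dev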

Kubernetes Jobs (one off tasks)

In the last post I created a simple application to be used within a schedule, i.e. a CronJob in Kubernetes.

We can also create one off tasks (or a Job) which might be used for migrations or some batch processing. We’re going to use everything from the previous post to build, containerize and push our image to a container registry. The only change is to use the supplied job.yaml file, listed below

apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-job
  namespace: dev
spec:
  template:
    spec:
      containers:
      - name: one-time-job
        image: putridparrotreg/putridparrot/crj:1.0.0
      restartPolicy: Never

Running the following “kubectl get jobs -n dev” results in something like this

NAME           STATUS     COMPLETIONS   DURATION   AGE
one-time-job   Complete   1/1           5s         41s

and if we check the pods with “kubectl get pods -n dev” we’ll see something like this

NAME                 READY   STATUS      RESTARTS   AGE
one-time-job-h5dvf   0/1     Completed   0          3m23s

and of course we can see the logs of this run via “kubectl logs one-time-job-h5dvf -n dev” and we get our application output, i.e. the date/time it was run

Current date and time: 2025-08-17 15:39:53.114962479 +00:00

You’ll note that the pod remained in the cluster; this allowed us to view the logs etc., and it’s down to the developer/devops to delete the job and pod unless…

We can actually set up automated deletion of the job (and its pods) using the “ttlSecondsAfterFinished” option in the yaml file, i.e.

apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-job
  namespace: dev
spec:
  ttlSecondsAfterFinished: 300  # Deletes Job and its Pods 5 minutes after completion
  template:
    spec:
      containers:
      - name: one-time-job
        image: putridparrotreg/putridparrot/crj:1.0.0
      restartPolicy: Never

We also have the option of “activeDeadlineSeconds”. This does not delete or clean up anything, but it can be used in the “spec:” section, like “ttlSecondsAfterFinished”, to denote that the job will be killed off if not finished within the given time. So for example

apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-job
  namespace: dev
spec:
  ttlSecondsAfterFinished: 300  # Deletes Job and its Pods 5 minutes after completion
  activeDeadlineSeconds: 600 # Job will be killed even if not finished, in 10 minutes
  template:
    spec:
      containers:
      - name: one-time-job
        image: putridparrotreg/putridparrot/crj:1.0.0
      restartPolicy: Never
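Without “ttlSecondsAfterFinished” it’s down to us to clean up, which can be done by deleting the job (this also removes its pods)

kubectl delete job one-time-job -n dev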

Kubernetes cronjobs

You know the scenario: you’re wanting to run jobs either at certain points in the day or throughout the day every N minutes (e.g. every 5 mins).

Kubernetes has you covered, there’s a specific “kind” of job for this, as you guessed from the title, the CronJob.

An example app.

Let’s assume you created yourself a job – I’m going to create a simple job that just outputs the date/time at the scheduled time. I’ve written this in Rust but to be honest it’s simple enough that this could be any language.

The application is just a standard console application named crj (for cronjob or cron rust job, I really didn’t think about it :)). Here’s the Cargo.toml

[package]
name = "crj"
version = "0.1.0"
edition = "2024"

[dependencies]
chrono = "0.4"

Here’s the code

use chrono::Local;

fn main() {
    let now = Local::now();
    println!("Current date and time: {}", now);
}

See I told you it was simple.

Docker

For completeness, here’s the Dockerfile and the steps to get things built, tagged and pushed

FROM rust:1.89.0-slim AS builder

WORKDIR /app
COPY . .

RUN cargo build --release

FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# copy just the binary, rather than the whole release directory, keeping the image small
COPY --from=builder /app/target/release/crj /usr/local/bin/crj

RUN chmod +x /usr/local/bin/crj

ENTRYPOINT ["/usr/local/bin/crj"]

Next up we need to build the image using (remember to use your own image name and the correct name for your container registry)

docker build -t putridparrot/crj:1.0.0 .

then tag it using

docker tag putridparrot/crj:1.0.0 putridparrotreg/putridparrot/crj:1.0.0

Finally we’ll push it to our container registry using

docker push putridparrotreg/putridparrot/crj:1.0.0

Kubernetes CronJob

All pretty standard stuff and to be honest the next bit is simple enough. We need to create a Kubernetes yaml file (or helm chart). Here’s my cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scheduled-job
  namespace: dev
spec:
  schedule: "*/5 * * * *" # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: scheduled-job
              image:  putridparrotreg/putridparrot/crj:1.0.0
          restartPolicy: Never

My cronjob has the name scheduled-job (I know, not very imaginative). We apply this file to Kubernetes as usual i.e.

kubectl apply -f .\cronjob.yaml

Did it work?

We’ll of course want to take a look at what happened after this CronJob was set up in Kubernetes. We can simply use the following (restricting to a given namespace, such as dev in my case, with -n if preferred).

kubectl get cronjobs --all-namespaces -w

you’ll see something like this

NAMESPACE   NAME            SCHEDULE      TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
dev         scheduled-job   */5 * * * *   <none>     False     0        <none>          9s
dev         scheduled-job   */5 * * * *   <none>     False     1        0s              16s
dev         scheduled-job   */5 * * * *   <none>     False     0        13s             29s
dev         scheduled-job   */5 * * * *   <none>     False     1        0s              5m16s

In my case the job starts (ACTIVE) and then completes and shuts down. Then 5 minutes later it starts again as expected with this cron schedule.

On the pods side you can run

kubectl get pods -n dev -w

Now what you’ll see is something like this

NAME                           READY   STATUS              RESTARTS   AGE
scheduled-job-29257380-5w4rg   0/1     Completed           0          51s
scheduled-job-29257385-qgml2   0/1     Pending             0          0s
scheduled-job-29257385-qgml2   0/1     Pending             0          0s
scheduled-job-29257385-qgml2   0/1     ContainerCreating   0          0s
scheduled-job-29257385-qgml2   1/1     Running             0          2s
scheduled-job-29257385-qgml2   0/1     Completed           0          3s
scheduled-job-29257385-qgml2   0/1     Completed           0          5s
scheduled-job-29257385-qgml2   0/1     Completed           0          5s
scheduled-job-29257390-2x98r   0/1     Pending             0          0s
scheduled-job-29257390-2x98r   0/1     Pending             0          0s
scheduled-job-29257390-2x98r   0/1     ContainerCreating   0          0s
scheduled-job-29257390-2x98r   1/1     Running             0          2s

Notice that the pod is created and goes into a “Pending” state, then “ContainerCreating”, before “Running” and finally “Completed”; the next run of the cronjob creates a pod with a new name. Therefore, if you try to view a pod’s logs, i.e. kubectl logs scheduled-job-29257380-5w4rg -n dev, then you’ll get something like the below, but you cannot -f (follow) the logs across runs as each run creates a new pod.

Current date and time: 2025-08-17 15:00:09.294317303 +00:00