Category Archives: C#

Scheduled Azure Devops pipelines

I wanted to run some tasks once a day. The idea being we run an application to check for any drift/changes to configuration etc. Luckily this is simple in Azure DevOps.

We create a YAML pipeline with no trigger and create a cron-style schedule instead, as below

trigger: none

schedules:
- cron: "0 7 * * *"
  displayName: Daily
  branches:
    include:
    - main
  always: true

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UseDotNet@2
  inputs:
    packageType: 'sdk'
    version: '10.x'

- script: dotnet build ./tools/TestDrift/TestDrift.csproj -c Release
  displayName: Build drift test

- script: |
    dotnet ./tools/TestDrift/bin/Release/net10.0/TestDrift.dll
  displayName: Run Test for drift

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: ./tools/TestDrift/bin/Release/net10.0/drift-results.xml
    failTaskOnFailedTests: true

In this example we’re publishing test results. Azure DevOps supports several formats, see the testResultsFormat input. We’re just creating an XML file named drift-results.xml with the following format


<testsuite tests="2" failures="1">
  <testcase name="check site" />
  <testcase name="check pipeline">
    <failure message="pipeline check failed" />
  </testcase>
</testsuite>

In C# we’d do something like

var suite = new XElement("testsuite");
var total = GetTotalTests();
var failures = 0;

var testCase = new XElement("testcase",
   new XAttribute("name", "check pipeline")
);

// run some test
var success = RunSomeTest();

if(!success)
{
  failures++;
  testCase.Add(new XElement("failure",
    new XAttribute("message", "pipeline check failed")
  ));
}

suite.Add(testCase);

// completed
suite.SetAttributeValue("tests", total);
suite.SetAttributeValue("failures", failures);

var exeDir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)!;
var outputPath = Path.Combine(exeDir, "drift-results.xml");

File.WriteAllText(outputPath, suite.ToString());

Using one of the valid formats, such as the JUnit format, will also result in the Azure Pipelines build showing a Tests tab with our test results listed.

Different ways of working with the HttpClient

A few years back I wrote the post .NET HttpClient – the correct way to use it.

I wanted to extend this discussion to the other ways of using/instantiating your HttpClient.

We’ll look at this from the view of the way we’d usually configure things for ASP.NET but these are not limited to ASP.NET.

IHttpClientFactory

Instead of passing an HttpClient to a class (such as a controller) we might prefer to use the IHttpClientFactory. This allows us to inject the factory and create an instance of an HttpClient using the CreateClient method. For example, in our Program.cs

builder.Services.AddHttpClient();

then in our code which uses the IHttpClientFactory

public class ExternalService(IHttpClientFactory httpClientFactory)
{
   public async Task LoadData()
   {
      var httpClient = httpClientFactory.CreateClient();
      // use the httpClient as usual
   }
}

This might not seem much of an advancement over passing around HttpClients.

Where this is really useful is in allowing us to configure our HttpClient, such as the base address, timeouts etc. In this situation we can use “named” clients. We’d assign a name to the client such as

builder.Services.AddHttpClient("external", client => {
  client.BaseAddress = new Uri("https://some_url");
  client.Timeout = TimeSpan.FromMinutes(3);
});

Now in usage we’d write the following

var httpClient = httpClientFactory.CreateClient("external");
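Putting this together, consuming a named client might look like the sketch below. The ExternalDataService class and the /data endpoint are hypothetical examples, not part of the original code:

```csharp
public class ExternalDataService(IHttpClientFactory httpClientFactory)
{
    public async Task<string> LoadDataAsync()
    {
        // Resolve the pre-configured "external" client by name
        var httpClient = httpClientFactory.CreateClient("external");

        // BaseAddress and Timeout were set at registration,
        // so only the relative path is needed here (hypothetical endpoint)
        return await httpClient.GetStringAsync("/data");
    }
}
```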

We can now configure multiple clients with different names for use in different scenarios. We can also add policy and message handlers, for example

builder.Services.AddHttpClient("external", client => {
  // configuration for the client
})
.AddHttpMessageHandler<SomeMessageHandler>()
.AddPolicyHandler(SomePolicy());

Strongly Typed Client

Typed or strongly typed clients are another way of using the HttpClient; weirdly this looks much more like our old way of passing HttpClients around.

We create a class specific to the HttpClient usage, and have an HttpClient parameter on the constructor, for example

public class ExternalHttpClient : IExternalHttpClient
{
  private readonly HttpClient _httpClient;

  public ExternalHttpClient(HttpClient httpClient)
  {
    _httpClient = httpClient;
    _httpClient.BaseAddress = new Uri("https://some_url");
    _httpClient.Timeout = TimeSpan.FromMinutes(3);
  }

  public Task<SomeData> GetDataAsync()
  {
     // use _httpClient as usual
  }
}

We’d now need to add the client to the dependency injection in Program.cs using

builder.Services.AddHttpClient<IExternalHttpClient, ExternalHttpClient>();
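With the typed client registered, a consumer simply depends on the interface. The ReportGenerator class below is a hypothetical example using the IExternalHttpClient from above:

```csharp
// DI supplies an ExternalHttpClient, complete with its own
// factory-managed HttpClient, wherever the interface is requested
public class ReportGenerator(IExternalHttpClient externalClient)
{
    public async Task<SomeData> BuildReportAsync()
        => await externalClient.GetDataAsync();
}
```

This is also what makes testing straightforward: a fake IExternalHttpClient can be passed in without any HttpClient machinery.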

Conclusions

The first question might be, why use strongly typed HttpClients over the IHttpClientFactory? The most obvious response is that it gives a cleaner design, i.e. we don’t use “magic strings” and we know which client does what, as it includes the methods to call the endpoints. Essentially it encapsulates our HttpClient usage for a specific endpoint. It also gives us a cleaner way of testing our code by allowing us to mock the interface only (not having to mock an IHttpClientFactory etc.).

However the IHttpClientFactory way of working gives us a central place where we’d generally have all our clients declared and configured; named clients allow us to switch between clients easily using the name, and it also gives great integrations for things like Polly.

Using Garnet

Garnet is a Redis (RESP) compatible cache from Microsoft Research. It’s used internally within Microsoft, but as it’s a research project it’s possible the design etc. will change/evolve.

Not only is it Redis compatible, it’s written in C#, so it’s ideal for .NET native environments. Check out the Garnet website for more information.

I’ve shown code to interact from C#/.NET to Redis in the past, the same code will work with Garnet.

Here’s a docker-compose file to create an instance of Garnet

services:
  garnet:
    image: 'ghcr.io/microsoft/garnet'
    ulimits:
      memlock: -1
    container_name: garnet
    ports:
      - "6379:6379"
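Because Garnet speaks RESP, the standard .NET Redis client works unchanged. A minimal sketch, assuming the StackExchange.Redis NuGet package and the container above running on localhost:

```csharp
using StackExchange.Redis;

// Connect to Garnet exactly as we would to Redis
var connection = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var db = connection.GetDatabase();

// Standard Redis string commands work as-is
await db.StringSetAsync("greeting", "Hello Garnet");
var value = await db.StringGetAsync("greeting");

Console.WriteLine(value);
```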

Let’s create a chatbot/agent using Azure AI Foundry and Semantic Kernel with C#

Setting up a project and model in AI Foundry

Let’s start by creating a project in https://ai.azure.com/

Note: I’m going to create a very simple, pretty standard chatbot for a pizza delivery service, so my project is going to be called pizza. You’ll see this in the code, but of course replace it with your preferred example or real code, as this is the same setup you’ll do for your own chatbot anyway.

  • Navigate to https://ai.azure.com/
  • Click Create new (if not available go to the Management Center | All Resources and the option should be there)
  • Select the Azure AI Foundry resource, then click Next
  • Supply a project name, resource group (or create one) and region – I left this as Sweden Central as I’m sure I read that it was a little more advanced than some regions, but do pick one which suits.
  • Click Create

Once completed you’ll be presented with the project page.

We’re not quite done as we need to deploy a model…

  • From the left hand nav. bar, locate My assets and click on Models + endpoints.
  • Click + Deploy model
  • Select Deploy base model from the drop down
  • From the Select a model popup, choose a model; I’ve selected gpt-4o-mini which is a good model for chat completion.
  • Click Confirm
  • Give it a Deployment name; I’m using the Deployment type Standard and leaving all the other fields that appear as their defaults
  • Click Deploy to assign the model to the project

We should now see some source code samples listed. We’ll partially be using these in the code part of this post, but before we move on we need an endpoint and an API key.

  • From this page on the Details tab copy the Endpoint Target URI – we don’t need the whole thing; from the project overview we can get the cut-down version, but it’s basically https://{your project}.cognitiveservices.azure.com/
  • From below the Target URI copy the Key

Writing the code

Create a Console application using Visual Studio.

Let’s begin by adding the following NuGet packages

dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.Extensions.Configuration
dotnet add package Microsoft.Extensions.Configuration.Json

We’re using (as you can see) Semantic Kernel. The versions seem to change pretty quickly at the moment, so hopefully the code below will work, but if not, check against the version you’re using. For completeness here are my versions

<ItemGroup>
  <PackageReference Include="Microsoft.Extensions.Configuration" Version="9.0.10" />
  <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="9.0.10" />
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.66.0" />
</ItemGroup>

Create yourself an appsettings.json file, which should look like this

{
  "AI": {
    "Endpoint": "<The Endpoint>",
    "ApiKey": "<Api Key>",
    "ApiVersion": "2024-12-01-preview",
    "DeploymentName":  "pizza"
  }
}

Obviously you’ll need to supply your endpoint and API key that we copied after creating our AI Foundry project.

Now before we go on to look at implementing the Program.cs… I want this LLM to use some custom functions to fulfil a couple of tasks, such as returning the menu and placing an order.

The AI Foundry project is an LLM which is our chatbot; it can answer questions and also generate hallucinations etc. For example, without supplying my functions it will try to create a pizza menu for me, but that’s not a lot of use to our pizza place.

What I want is the Natural Language Processing (NLP) as well as the model’s “knowledge” to work with my functions – we implement this using Plugins.

What I want to happen is this

  • The customer connects to the chatbot/LLM
  • The customer then asks to either order a pizza or for information on what pizzas we make, i.e. the menu
  • The LLM then needs to pass information to the PizzaPlugin which then returns information to the LLM to respond to the customer

Our PizzaPlugin is a standard C# class and we’re going to keep things simple, but you can imagine that this could call into a database or whatever you like to get a menu and place an order.

using System.ComponentModel;
using Microsoft.SemanticKernel;

public class PizzaPlugin
{
    [KernelFunction]
    [Description("Use this function to list the pizzas a customer can order")]
    public string ListMenu() => "We offer Meaty Feasty, Pepperoni, Veggie, and Cheese pizzas.";

    [KernelFunction]
    public string PlaceOrder(string pizzaType)
        => $"Order placed for: {pizzaType}. It will be delivered in 30 minutes.";
}

The KernelFunctionAttribute is registered/added to the Semantic Kernel to supply callable plugin functions. The DescriptionAttribute is optional, but recommended if you want the LLM to understand what the function does during auto function calling (which we will be using). I’ve left the other function without this DescriptionAttribute just to demonstrate it’s not required in this case, yet our function will/should still be called. If we have many similar functions this would be a helpful addition.

Note: Try to also use function names that clearly state their usage, i.e. use action oriented naming.

Now let’s implement the Program.cs where we’ll read in our configuration from the appsettings.json, create the Semantic Kernel, add the Azure OpenAI chat services, add the plugin we just created, then call into the AI Foundry LLM model we created earlier.

We’re NOT going to create all the code for an actual console based chat app, hence we’ll just predefine the “chat” part with a ChatHistory object. In a real-world app you may wish to keep track of the chat history.

using Microsoft.Extensions.Configuration;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using OpenAI.Chat;
using SemanticKernelTest.PizzaPlugin;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

var endpoint = config["AI:Endpoint"];
var apiKey = config["AI:ApiKey"];
var apiVersion = config["AI:ApiVersion"];
var deploymentName = config["AI:DeploymentName"];

var builder = Kernel.CreateBuilder();

builder.AddAzureOpenAIChatCompletion(
    deploymentName: deploymentName,
    endpoint: endpoint,
    apiKey: apiKey,
    apiVersion: apiVersion
);

var kernel = builder.Build();

var plugin = new PizzaPlugin();
kernel.Plugins.AddFromObject(plugin);

var chatCompletion = kernel.GetRequiredService<IChatCompletionService>();

var chatHistory = new ChatHistory();
chatHistory.AddAssistantMessage("How can I help you?");
chatHistory.AddUserMessage("Can I order a plain Pepperoni pizza?");

var result = await chatCompletion.GetChatMessageContentAsync(chatHistory, new PromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
}, kernel);

Console.WriteLine(result.Content);

Before you run this code, place breakpoints on the kernel functions in the plugin and then run the code. Hopefully all runs ok and you’ll notice that the LLM (via Semantic Kernel) calls into the plugin methods. As you’ll hopefully see – it calls the menu to check whether the pizza supplied is one we make, then orders it if it does exist. Change the pizza to one we do not make (for example Chicken) and watch the process and output.

More settings

In the code above we’re using the PromptExecutionSettings, but we can also use OpenAIPromptExecutionSettings instead; from this we can configure OpenAI options such as Temperature, MaxTokens and others, for example

var result = await chatCompletion.GetChatMessageContentAsync(chatHistory, new OpenAIPromptExecutionSettings
{
  Temperature = 0.7,
  FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
  MaxTokens = 100
}, kernel);

These options are also settable in AI Foundry. Temperature controls the randomness of the model: a lower value is more deterministic whereas a higher value produces more random results; the default is 1.0.

  • 0.2-0.5 is more deterministic and produces more focused outputs
  • 0.8-1.0 allows for more diverse and creative responses

Secure data in plain sight (using .NET)

Often we’ll come across situations where we need to put a password into our application, for example securing a connection string or login details for a service.

This post is more a dissection of the post Encrypting Passwords in a .NET app.config File than anything new, but interestingly it threw up a bunch of things to look at.

String & SecureString

Let’s start by looking at String storage. Ultimately strings are stored in memory in clear text and their disposal is determined by garbage collection, hence their lifespan is non-deterministic and we could dump the strings in clear text using WinDbg or similar tools.

SecureString is part of the System.Security assembly and supplies a mechanism for encrypting a string in memory; it can also be disposed of when no longer required.

The following code shows how we might use a SecureString.

// Note: ProtectedData uses the Windows DPAPI so this code is Windows-only;
// on .NET (Core) it requires the System.Security.Cryptography.ProtectedData package
public static class Secure
{
   private static readonly byte[] ENTROPY = System.Text.Encoding.Unicode.GetBytes("Salt Is Not A Password");

   public static string Encrypt(SecureString input)
   {
      return Encrypt(ToInsecureString(input));
   }

   public static string Encrypt(string input)
   {
      var encryptedData = System.Security.Cryptography.ProtectedData.Protect(
         System.Text.Encoding.Unicode.GetBytes(input),
         ENTROPY,
         System.Security.Cryptography.DataProtectionScope.CurrentUser);
      return Convert.ToBase64String(encryptedData);
   }

   public static SecureString DecryptToSecureString(string encryptedData)
   {
      var result = DecryptToString(encryptedData);
      return result != null ? ToSecureString(result) : new SecureString();
   }

   public static string DecryptToString(string encryptedData)
   {
      try
      {
         var decryptedData = System.Security.Cryptography.ProtectedData.Unprotect(
            Convert.FromBase64String(encryptedData),
            ENTROPY,
            System.Security.Cryptography.DataProtectionScope.CurrentUser);
         return System.Text.Encoding.Unicode.GetString(decryptedData);
      }
      catch
      {
         return null;
      }
   }

   public static SecureString ToSecureString(string input)
   {
      var secure = new SecureString();
      foreach (var c in input)
      {
         secure.AppendChar(c);
      }
      secure.MakeReadOnly();
      return secure;
   }

   public static string ToInsecureString(SecureString input)
   {
      string returnValue;
      var ptr = System.Runtime.InteropServices.Marshal.SecureStringToBSTR(input);
      try
      {
         returnValue = System.Runtime.InteropServices.Marshal.PtrToStringBSTR(ptr);
      }
      finally
      {
         System.Runtime.InteropServices.Marshal.ZeroFreeBSTR(ptr);
      }
      return returnValue;
   }
}
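Usage is then a simple round trip through the class above. A minimal sketch; the connection string value is just an example:

```csharp
// Encrypt a secret; the result is Base64 text safe to store in
// app.config, a file, etc. Only the same Windows user account
// (DataProtectionScope.CurrentUser) can decrypt it
var encrypted = Secure.Encrypt("my-connection-string");

// Later, recover the original value
var decrypted = Secure.DecryptToString(encrypted);

Console.WriteLine(decrypted == "my-connection-string"); // True
```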

References

Encrypting Passwords in a .NET app.config File
SecureString Class

A simple web API in various languages and deployable to Kubernetes (C#)

Introduction

I’m always interested in how different programming languages and their libs/frameworks tackle the same problem. Recently the topic of writing web APIs in whatever language we wanted came up, and so I thought, well, let’s try to do just that.

The service is maybe too simple for a really good explanation of the frameworks and language features of the languages I’m going to use, but at the same time I wanted to do just the bare minimum to have something working, yet enough.

The service is an “echo” service; it will have an endpoint that simply passes back what’s sent to it (prefixed with some text) and will also supply livez and readyz endpoints, as I want to also create a Dockerfile and the associated k8s yaml files to deploy the service.

The healthz endpoint is deprecated as of k8s v1.16, so we’ll leave that one out.

It should be noted that there are (in some cases) other frameworks and optimisations that could be used; my interest is solely to get a basic Web API deployed to k8s that works, so you may prefer other ways to do this.

C# Minimal API

Let’s start with an ASP.NET core, minimal API, web API…

  • Create an ASP.NET core Web API project in Visual Studio
  • Enable container support and I’ve chosen Linux OS
  • Ensure Container build type is set to Dockerfile
  • I’m using minimal API so ensure “Use controllers” is not checked

Now let’s just replace Program.cs with the following

using Microsoft.AspNetCore.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHealthChecks();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapGet("/echo", (string text) =>
    {
        // use structured logging rather than string interpolation
        app.Logger.LogInformation("C# Echo: {Text}", text);
        return $"Echo: {text}";
    })
    .WithName("Echo")
    .WithOpenApi();

app.MapHealthChecks("/livez");
app.MapHealthChecks("/readyz", new HealthCheckOptions
{
    Predicate = _ => true
});

app.Run();

Docker

Next we need to copy the Dockerfile from the csproj folder to the sln folder – for completeness here’s the Dockerfile generated by Visual Studio (comments removed)

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER $APP_UID
WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["EchoService/EchoService.csproj", "EchoService/"]
RUN dotnet restore "./EchoService/EchoService.csproj"
COPY . .
WORKDIR "/src/EchoService"
RUN dotnet build "./EchoService.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./EchoService.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "EchoService.dll"]

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo-service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo-service:v1

and we can test using “http://localhost:8080/echo?text=Putridparrot”

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo-service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

Microsoft’s Dependency Injection

Dependency injection has been a fairly standard part of development for a while. You’ve probably used Unity, Autofac, Ninject and others in the past.

Frameworks, such as ASP.NET core and MAUI use the Microsoft Dependency Injection package (Microsoft.Extensions.DependencyInjection) and we can use this with any other type of application.

For example, if we create ourselves a Console application, then add the package Microsoft.Extensions.DependencyInjection, we can then use the following code

var serviceCollection = new ServiceCollection();

// add our services

var serviceProvider = serviceCollection.BuildServiceProvider();

and it’s as simple as that.

The Microsoft.Extensions.DependencyInjection package has most of the features we require for most dependency injection scenarios (Note: it does not support property injection, for example). We can add services as…

  • Transient an instance created for every request, for example
    serviceCollection.AddTransient<IPipeline, Pipeline>();
    // or
    serviceCollection.AddTransient<Pipeline>();
    
  • Singleton a single instance created and reused on every request, for example
    serviceCollection.AddSingleton<IPipeline, Pipeline>();
    // or
    serviceCollection.AddSingleton<Pipeline>();
    
  • Scoped when we create a scope we get the same instance within the scope. In ASP.NET core a scope is created for each request
    serviceCollection.AddScoped<IPipeline, Pipeline>();
    // or
    serviceCollection.AddScoped<Pipeline>();
    

For the services registered as “scoped”, if no scope is created then the code will work more or less like a singleton, i.e. the scope is the whole application. But if we want to mimic ASP.NET (for example) we would create a scope per request, and we would do this using the following

using var scope = serviceProvider.CreateScope();

var pipeline1 = scope.ServiceProvider.GetRequiredService<Pipeline>();
var pipeline2 = scope.ServiceProvider.GetRequiredService<Pipeline>();

In the above code the same instance of the Pipeline is returned for each GetRequiredService call, but when the scope is disposed of or another scope created, then a new instance for that scope will be returned.
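We can see this by comparing references across two scopes. A minimal sketch, assuming Pipeline was registered with AddScoped:

```csharp
// Two scopes yield two different Pipeline instances, while calls
// within the same scope share one
using (var scope1 = serviceProvider.CreateScope())
using (var scope2 = serviceProvider.CreateScope())
{
    var a = scope1.ServiceProvider.GetRequiredService<Pipeline>();
    var b = scope1.ServiceProvider.GetRequiredService<Pipeline>();
    var c = scope2.ServiceProvider.GetRequiredService<Pipeline>();

    Console.WriteLine(ReferenceEquals(a, b)); // True - same scope
    Console.WriteLine(ReferenceEquals(a, c)); // False - different scopes
}
```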

The service provider is used to create/return instances of our services. We can use GetRequiredService, which will throw an InvalidOperationException if the service is not registered, or we might use GetService, which will not throw an exception but will return either the instance or null.

Multiple services of the same type

If we register multiple implementations of our services like this

serviceCollection.AddTransient<IPipeline, Pipeline1>();
serviceCollection.AddTransient<IPipeline, Pipeline2>();

and we use the service provider’s GetRequiredService<IPipeline>, we will get a Pipeline2 – it will be the last registered type.

If we want to get all services registered for type IPipeline then we use GetServices<IPipeline> and we’ll get an IEnumerable of IPipeline, so if we have a service which takes an IPipeline, we’d need to declare it as follows

public class Context(IEnumerable<IPipeline> pipeline)
{
}

Finally we have the keyed option; this allows us to register multiple variations of an interface (for example) and give each a key/name, for example

serviceCollection.AddKeyedTransient<IPipeline, Pipeline1>("one");
serviceCollection.AddKeyedTransient<IPipeline, Pipeline2>("two");

Now these will not be returned when using GetServices<IPipeline> instead it’s expected that we get the service by the key, i.e.

var pipeline = serviceProvider.GetKeyedService<IPipeline>("one");

When declaring the requirement in our dependent classes we would use the FromKeyedServicesAttribute like this

public class Context([FromKeyedServices("one")] IPipeline pipeline)
{
}

StringSyntaxAttribute and the useful hints on DateTime ToString

For a while I’ve used the DateTime ToString method and noticed the “hint” showing the possible formats, but I’ve never really thought about how this happens, until now.

Note: This attribute came in for projects targeting .NET 7 or later.

The title of this post gives away the answer to how this all works, but let’s take a look anyway…

If you type

DateTime.Now.ToString("

Visual Studio kindly shows a list of different formatting such as Long Date, Short Date etc.

We can use this same technique in our own code (most likely libraries etc.) by simply adding the StringSyntax attribute to our method parameter(s).

For example

static void Write(
   [StringSyntax(StringSyntaxAttribute.DateOnlyFormat)] string input)
{
    Console.WriteLine(input);
}

This attribute does not enforce the format (in the example above), i.e. you can enter whatever you like as a string. It just gives you some help (or hints) as to possible values; in the case of DateOnlyFormat these are possible date formats. StringSyntax actually supports other syntax hints such as DateTimeFormat, GuidFormat and more.
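For example, the Regex hint gives callers regex-aware colouring and completion inside the string literal. A minimal sketch; the Log class and Matches method are hypothetical:

```csharp
using System.Diagnostics.CodeAnalysis;
using System.Text.RegularExpressions;

static class Log
{
    // Callers get regex syntax highlighting/assistance for the
    // pattern argument in Visual Studio
    public static bool Matches(
        [StringSyntax(StringSyntaxAttribute.Regex)] string pattern,
        string input)
        => Regex.IsMatch(input, pattern);
}
```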

Sadly (at least at the time of writing) I don’t see any options for custom formats.

Messing around with MediatR

MediatR is an implementation of the Mediator pattern. It doesn’t match the pattern exactly, but as the creator Jimmy Bogard states, “It matches the problem description (reducing chaotic dependencies), the implementation doesn’t exactly match…”. It’s worth reading his post You Probably Don’t Need to Worry About MediatR.

This pattern is aimed at decoupling the likes of business logic from a UI layer or request/responses.

There are several ways we can already achieve this in our code. For example, using interfaces to decouple the business logic from the UI or API layers as “services”, as we’ve probably all done for years – the only drawback of this approach is that it requires the interfaces to be passed around in our code or via DI, but it’s a great way to do things. Another approach, often used within UIs built with WPF, Xamarin Forms, MAUI and others, is in-process message queues that send messages around our application telling it to undertake some task, and this is essentially what MediatR is giving us.

Let’s have a look at using MediatR. I’m going to create an ASP.NET web API (obviously you could use MediatR in other types of solutions)

  • Create an ASP.NET Core Web API. I’m using Minimal API, so feel free to check that or stick with controllers as you prefer.
  • Add the nuget package MediatR
  • To the Program.cs file add
    builder.Services.AddMediatR(cfg => 
      cfg.RegisterServicesFromAssembly(typeof(Program).Assembly));
    

At this point we have MediatR registering services for us at startup. We can pass multiple assemblies to the RegisterServicesFromAssembly method, so if we have our request/response code in multiple assemblies we can supply just those assemblies. Obviously this makes our life simpler, but at the cost of reflecting across our code at startup.

The ASP.NET Core Web API creates the WeatherForecast example, we’ll just use this for our sample code as well.

The first thing you’ll notice is that the route to the weatherforecast is tightly coupled to the sample code. Of course it’s an example, so this is fine, but we’re going to clean things up here and move the implementation into a file named GetWeatherForecastHandler, but before we do that…

Note: Of course we could just move the weather forecast code into a WeatherForecastService and create an IWeatherForecastService interface, and there’s no reason not to do that; MediatR just offers an alternative way of doing things.

MediatR will try to find a matching handler for your request. In this example we have no request parameters, which begs the question of how MediatR will match against our GetWeatherForecastHandler. It needs a unique request type to map to our handler; in this case the simplest thing to do is create the request type yourself. Mine’s named GetWeatherForecast and looks like this

public record GetWeatherForecast : IRequest<WeatherForecast[]>
{
    public static GetWeatherForecast Default { get; } = new();
}

Note: I’ve created a static property so we’re not creating an instance for every call; however this is not required, and obviously when you are passing parameters you will be creating an instance of a type each time – this does concern me a little if we need high performance and are trying to write allocation free code, but then we’d do lots differently, including probably not using MediatR.

Now we’ll create the GetWeatherForecastHandler file and the code looks like this

public class GetWeatherForecastHandler : IRequestHandler<GetWeatherForecast, WeatherForecast[]>
{
  private static readonly string[] Summaries = new[]
  {
    "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
  };

  public Task<WeatherForecast[]> Handle(GetWeatherForecast request, CancellationToken cancellationToken)
  {
    var forecast = Enumerable.Range(1, 5).Select(index =>
      new WeatherForecast
      {
        Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
        TemperatureC = Random.Shared.Next(-20, 55),
        Summary = Summaries[Random.Shared.Next(Summaries.Length)]
      })
    .ToArray();

    return Task.FromResult(forecast);
  }
}

At this point we’ve created a way for MediatR to find the required handler (i.e. using the GetWeatherForecast type) and we’ve created a handler to create the response. In this example we’re not doing any async work, so we just wrap the result in a Task.FromResult.

Next go back to Program.cs, or if you’ve used controllers, go to your controller. If using controllers you’ll need the constructor to take an IMediator mediator parameter and assign it to a readonly field in the usual way.
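For the controller flavour, the wiring might look something like the sketch below; this controller is illustrative rather than lifted from the template.

```csharp
using MediatR;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
  private readonly IMediator _mediator;

  // IMediator is injected and stored in the usual way
  public WeatherForecastController(IMediator mediator) => _mediator = mediator;

  [HttpGet(Name = "GetWeatherForecast")]
  public Task<WeatherForecast[]> Get() => _mediator.Send(GetWeatherForecast.Default);
}
```

The controller itself stays thin; all the real work lives in the handler, exactly as in the minimal API version below.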

For our minimal API example, go back to the Program.cs file, remove the summaries variable/code, and then change the route code to look like this

app.MapGet("/weatherforecast",  (IMediator mediator) => 
  mediator.Send(GetWeatherForecast.Default))
.WithName("GetWeatherForecast")
.WithOpenApi();

We’re not really playing too nicely in the code above, in that we’re not returning result codes, so let’s add some basic result handling

app.MapGet("/weatherforecast", async (IMediator mediator) => 
  await mediator.Send(GetWeatherForecast.Default) is { } results 
    ? Results.Ok(results) 
    : Results.NotFound())
  .WithName("GetWeatherForecast")
  .WithOpenApi();

Now for each new HTTP method call, we would create a request object and a handler object. In this case we send no parameters, but as you can no doubt see, for a request that takes (for example) a string for your location, we’d create a specific type for wrapping that parameter and the handler can then be mapped to that request type.
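To sketch that point, a location-based request might look like the following; the GetWeatherForecastForLocation type and its handler are hypothetical, not part of the sample app.

```csharp
// Hypothetical parameterised request; MediatR matches the handler by this type
public record GetWeatherForecastForLocation(string Location) : IRequest<WeatherForecast[]>;

public class GetWeatherForecastForLocationHandler
  : IRequestHandler<GetWeatherForecastForLocation, WeatherForecast[]>
{
  public Task<WeatherForecast[]> Handle(GetWeatherForecastForLocation request,
    CancellationToken cancellationToken)
  {
    // request.Location would drive a real lookup; here we just echo it back
    var forecast = Enumerable.Range(1, 5).Select(index =>
      new WeatherForecast
      {
        Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
        TemperatureC = Random.Shared.Next(-20, 55),
        Summary = request.Location
      })
    .ToArray();

    return Task.FromResult(forecast);
  }
}
```

The route then creates an instance per call, e.g. mediator.Send(new GetWeatherForecastForLocation(location)).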

In our example we used the MediatR Send method. This sends a request to a single handler and expects a response of some type, but MediatR also has the ability to Publish to multiple handlers. These types of handlers are different: firstly they need to implement the INotificationHandler interface, and secondly no response is expected when using Publish. These sorts of handlers are more like event broadcasts, so you might use them to send a message to an email service which sends out an email upon request, or to database code which updates a database.

Our WeatherForecast sample doesn’t give me any good ideas for using Publish in its current setup, so let’s just assume we have a way to set the current location. Like I said, this example’s a little contrived, as we’re going to essentially set the location for everyone connecting to this service, but you get the idea.

We’re going to add a SetLocation request type that looks like this

public record SetLocation(string Location) : INotification;

Notice that for publish our type implements the INotification interface. Our handlers look like this (my file is named SetLocationHandler.cs but I’ll put both handlers in there, just to be a little lazy)

public class UpdateHandler1 : INotificationHandler<SetLocation>
{
  public Task Handle(SetLocation notification, CancellationToken cancellationToken)
  {
    Console.WriteLine(nameof(UpdateHandler1));
    return Task.CompletedTask;
  }
}

public class UpdateHandler2 : INotificationHandler<SetLocation>
{
  public Task Handle(SetLocation notification, CancellationToken cancellationToken)
  {
    Console.WriteLine(nameof(UpdateHandler2));
    return Task.CompletedTask;
  }
}

As you can see, the handlers need to implement INotificationHandler with the correct request type. In this sample we’ll just write messages to console, but you might have a more interesting set of handlers in mind.

Finally let’s add the following to the Program.cs to publish a message

app.MapGet("/setlocation", (IMediator mediator, string location) =>
  mediator.Publish(new SetLocation(location)))
.WithName("SetLocation")
.WithOpenApi();

When you run up your server and use Swagger, or call the setlocation method via its URL, you’ll see that all the handlers for that request get called.

Of course we can also Send and Publish messages/requests from our handlers, so maybe we get the weather forecast data then publish a message for some logging system to update the logs.
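As a sketch of that idea, the earlier handler could take its own IMediator dependency and publish a notification after producing the forecast; the ForecastProduced type here is made up purely for illustration.

```csharp
// Hypothetical notification published after a forecast is produced
public record ForecastProduced(int Days) : INotification;

public class GetWeatherForecastHandler : IRequestHandler<GetWeatherForecast, WeatherForecast[]>
{
  private readonly IMediator _mediator;

  public GetWeatherForecastHandler(IMediator mediator) => _mediator = mediator;

  public async Task<WeatherForecast[]> Handle(GetWeatherForecast request,
    CancellationToken cancellationToken)
  {
    var forecast = BuildForecast(); // same generation code as before

    // Broadcast to any ForecastProduced handlers (logging, auditing etc.)
    await _mediator.Publish(new ForecastProduced(forecast.Length), cancellationToken);

    return forecast;
  }

  private static WeatherForecast[] BuildForecast() =>
    Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
      Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
      TemperatureC = Random.Shared.Next(-20, 55)
    }).ToArray();
}
```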

MediatR also includes the ability to stream responses, where our request type implements IStreamRequest and our handlers implement IStreamRequestHandler.

If we create a simple request type but this one implements IStreamRequest for example

public record GetWeatherStream : IStreamRequest<WeatherForecast>;

and now add a handler which implements IStreamRequestHandler, something like this (which delays, just to give a feel of getting data from somewhere else)

public class GetWeatherStreamHandler : IStreamRequestHandler<GetWeatherStream, WeatherForecast>
{
  public async IAsyncEnumerable<WeatherForecast> Handle(GetWeatherStream request, 
    [EnumeratorCancellation] CancellationToken cancellationToken)
  {
    var index = 0;
    while (!cancellationToken.IsCancellationRequested)
    {
      await Task.Delay(500, cancellationToken);
      yield return new WeatherForecast
      {
        Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
        TemperatureC = Random.Shared.Next(-20, 55),
        Summary = Data.Summaries[Random.Shared.Next(Data.Summaries.Length)]
      };

      index++;
      if(index > 10)
        break;
    }
  }
}

Finally we can declare our streaming route using Minimal API very simply, for example

app.MapGet("/stream", (IMediator mediator) =>
  mediator.CreateStream(new GetWeatherStream()))
.WithName("Stream")
.WithOpenApi();
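On the consuming side, the endpoint delivers items as they’re yielded rather than in one response body. As a rough sketch, assuming .NET 8’s GetFromJsonAsAsyncEnumerable extension and a locally running server (the port here is made up):

```csharp
using System.Net.Http.Json;

var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

// Each WeatherForecast is deserialised and surfaced as it arrives,
// roughly every 500ms given the Task.Delay in the handler
await foreach (var forecast in client.GetFromJsonAsAsyncEnumerable<WeatherForecast>("/stream"))
{
  Console.WriteLine($"{forecast?.Date}: {forecast?.TemperatureC}C");
}
```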

Collection Expressions in C# 12

C# 12 includes something called collection expressions. These offer a more generic way to create our collections from array-like syntax.

Let’s look first at the old style creation of an array of integers

var array = new [] { 1, 2, 3 };

This is simple enough; array is of type int[]. This way of creating arrays is not going away, but if we want to change the array to a different collection type then we end up using collection initializers like this

var list = new List<int> { 1, 2, 3 };

There’s nothing much wrong with this, but essentially we’re sort of doing the same thing, just with different syntax.

Collection expressions now allow us to use syntax such as (below) to create our collection regardless of type

int[] array = [1, 2, 3];
List<int> list = [1, 2, 3];

On the surface this may not seem a big deal, but imagine you’ve a class that accepts an int[] and maybe you later change the type to a List<int>. Passing the values via the collection expression [] syntax means that part of your code remains unchanged; it just stays as [1, 2, 3].
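To sketch that point, the hypothetical Sum method below could later change its parameter type from int[] to List<int> (or Span<int>) and the call site would compile unchanged:

```csharp
static int Sum(int[] values)
{
  var total = 0;
  foreach (var v in values)
    total += v;
  return total;
}

// If Sum's parameter later becomes List<int>, this line stays as-is
var result = Sum([1, 2, 3]);
Console.WriteLine(result); // 6
```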

Along with this we get to use the spread operator .. for example

List<int> list = [1, 2, 3];
int[] array = [.. list];

In this example we’ve created a list then basically copied the items to the array, but a spread operator can be used to concatenate values (or collections), such as

int[] array = [-3, -2, -1, 0, .. list];

Creating your own collections to use collection expressions

For many of the original types, such as List<T>, collection expression support is built in. But newer collections and, if we want, our own collections can take advantage of this syntax by following a minimal set of rules.

All we need to do is create our collection type and add the CollectionBuilderAttribute to it like this

[CollectionBuilder(typeof(MyCollection), nameof(MyCollection.Create))]
public class MyCollection<T>
{
   // our code
}

Now this alone is not going to work: the typeof expects a non-generic type, so we create a simple non-generic version of this class to handle the creation of the generic version. Also notice the CollectionBuilder attribute expects the name of the method to call, a method that takes a single parameter of type ReadOnlySpan<T> and returns the collection type, now initialized, like this

public class MyCollection
{
  public static MyCollection<T> Create<T>(ReadOnlySpan<T> items)
  {
     // returns a MyCollection<T>
  }
}

Let’s look at a potential bare-minimum implementation of this collection type which can be used with the collection expression syntax. Notice we will also need to implement IEnumerable and/or IEnumerable<T>

[CollectionBuilder(typeof(MyCollection), nameof(MyCollection.Create))]
public class MyCollection<T> : IEnumerable<T>
{
  public static readonly MyCollection<T> Empty = new(Array.Empty<T>());

  private readonly List<T> _innerCollection;

  internal MyCollection(T[]? items)
  {
    _innerCollection = items == null ? new List<T>() : [..items];
  }

  public T this[int index] => _innerCollection[index];
  public IEnumerator<T> GetEnumerator() => _innerCollection.GetEnumerator();
  IEnumerator IEnumerable.GetEnumerator() => _innerCollection.GetEnumerator();
}

public class MyCollection
{
  public static MyCollection<T> Create<T>(ReadOnlySpan<T> items)
  {
    return items.IsEmpty ? 
      MyCollection<T>.Empty : 
      new MyCollection<T>(items.ToArray());
  }
}

Of course this is a silly example, as we’re not adding anything that the inner List<T> cannot supply, but you get the idea. Now we can use collection expression syntax on our new collection type

MyCollection<int> collection = [1, 2, 6, 7];