Category Archives: Programming

Scheduled Azure Devops pipelines

I wanted to run some tasks once a day. The idea being we run an application to check for any drift/changes to configuration etc. Luckily this is simple in Azure DevOps.

We create a YAML pipeline with no trigger and a cron-style schedule instead, as below

trigger: none

schedules:
- cron: "0 7 * * *"
  displayName: Daily
  branches:
    include:
    - main
  always: true

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UseDotNet@2
  inputs:
    packageType: 'sdk'
    version: '10.x'

- script: dotnet build ./tools/TestDrift/TestDrift.csproj -c Release
  displayName: Test for drift

- script: |
    dotnet ./tools/TestDrift/bin/Release/net10.0/TestDrift.dll
  displayName: Run Test for drift

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: ./tools/TestDrift/bin/Release/net10.0/drift-results.xml
    failTaskOnFailedTests: true

In this example we’re publishing test results. Azure DevOps supports several formats, see the testResultsFormat input. We’re just creating an XML file named drift-results.xml with the following format


<testsuite tests="0" failures="0">
  <testcase name="check site" />
  <testcase name="check pipeline">
    <failure message="pipeline check failed" />
  </testcase>
</testsuite>

In C# we’d do something like

using System.IO;
using System.Reflection;
using System.Xml.Linq;

var suite = new XElement("testsuite");
var total = GetTotalTests();
var failures = 0;

var testCase = new XElement("testcase",
   new XAttribute("name", "check pipeline")
);

// run some test
var success = RunSomeTest();

if (!success)
{
  failures++;
  testCase.Add(new XElement("failure",
    new XAttribute("message", "pipeline check failed")
  ));
}

suite.Add(testCase);

// completed
suite.SetAttributeValue("tests", total);
suite.SetAttributeValue("failures", failures);

var exeDir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)!;
var outputPath = Path.Combine(exeDir, "drift-results.xml");

File.WriteAllText(outputPath, suite.ToString());

Using one of the valid formats, such as the JUnit format, will also result in the Azure Pipelines build showing a Tests tab with our test results listed.

Different ways of working with the HttpClient

A few years back I wrote the post .NET HttpClient – the correct way to use it.

I wanted to extend this discussion to the other ways of using/instantiating your HttpClient.

We’ll look at this from the view of the way we’d usually configure things for ASP.NET but these are not limited to ASP.NET.

IHttpClientFactory

Instead of passing an HttpClient to a class (such as a controller) we might prefer to use the IHttpClientFactory. This allows us to inject the factory and create an instance of an HttpClient using the method CreateClient, for example in our Program.cs

builder.Services.AddHttpClient();

then in our code which uses the IHttpClientFactory

public class ExternalService(IHttpClientFactory httpClientFactory)
{
   public Task LoadData()
   {
      var httpClient = httpClientFactory.CreateClient();
      // use the httpClient as usual
   }
}

This might not seem that much of an advancement over passing around HttpClients.

Where this is really useful is in allowing us to configure our HttpClient, such as base address, timeouts etc. In this situation we can use “named” clients. We’d assign a name to the client such as

builder.Services.AddHttpClient("external", client => {
  client.BaseAddress = new Uri("https://some_url");
  client.Timeout = TimeSpan.FromMinutes(3);
});

Now in usage we’d write the following

var httpClient = httpClientFactory.CreateClient("external");

We can now configure multiple clients with different names for use in different scenarios. We can also add policy and message handlers, for example

builder.Services.AddHttpClient("external", client => {
  // configuration for the client
})
.AddHttpMessageHandler<SomeMessageHandler>()
.AddPolicyHandler(SomePolicy());
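SomeMessageHandler above stands in for whatever DelegatingHandler you need. As a sketch (the handler name and header below are my own invention, not from any library), a handler that stamps a correlation id on outgoing requests might look like this:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical message handler: stamps a correlation id on every request
// passing through the client's pipeline before handing it on.
public class CorrelationIdHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Leave any correlation id the caller has already set alone
        if (!request.Headers.Contains("X-Correlation-Id"))
        {
            request.Headers.Add("X-Correlation-Id", Guid.NewGuid().ToString());
        }
        return base.SendAsync(request, cancellationToken);
    }
}
```

Note a handler attached via AddHttpMessageHandler must also be registered with the container, e.g. builder.Services.AddTransient<CorrelationIdHandler>();.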

Strongly Typed Client

Typed or strongly typed clients are another way of using the HttpClient; weirdly this looks much more like our old way of passing HttpClients around.

We create a class specific to the HttpClient usage, and have an HttpClient parameter on the constructor, for example

public class ExternalHttpClient : IExternalHttpClient
{
  private readonly HttpClient _httpClient;

  public ExternalHttpClient(HttpClient httpClient)
  {
    _httpClient = httpClient;
    _httpClient.BaseAddress = new Uri("https://some_url");
    _httpClient.Timeout = TimeSpan.FromMinutes(3);
  }

  public Task<SomeData> GetDataAsync()
  {
     // use _httpClient as usual
  }
}

We’d now need to add the client to the dependency injection in Program.cs using

builder.Services.AddHttpClient<IExternalHttpClient, ExternalHttpClient>();

Conclusions

The first question might be: why use strongly typed HttpClients over IHttpClientFactory? The most obvious response is that it gives a cleaner design, i.e. we don’t use “magic strings” and we know which client does what, as it includes the methods to call the endpoints. Essentially it encapsulates our HttpClient usage for a specific endpoint. It also gives us a cleaner way of testing our code by allowing us to mock just the interface (not having to mock an IHttpClientFactory etc.).

However, the IHttpClientFactory way of working gives us a central place where we’d generally have all our clients declared and configured; named clients allow us to switch between clients easily using the name, and it also gives us great integration with things like Polly.
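To illustrate that Polly integration, here’s a hedged sketch using the Polly package directly: the kind of retry policy a SomePolicy() helper might build for AddPolicyHandler, exercised here against a deliberately flaky operation (the retry count and exception type are just for illustration).

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

var attempts = 0;

// Retry up to 3 times when the operation throws an HttpRequestException
var retryPolicy = Policy
    .Handle<HttpRequestException>()
    .RetryAsync(3);

var result = await retryPolicy.ExecuteAsync(() =>
{
    attempts++;
    if (attempts < 3)
        throw new HttpRequestException("transient failure");
    return Task.FromResult("ok");
});

Console.WriteLine($"{result} after {attempts} attempts"); // ok after 3 attempts
```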

Using Garnet

Garnet is a Redis (RESP) compatible cache from Microsoft Research. It’s used internally within Microsoft, but as it’s a research project it’s possible the design etc. will change/evolve.

Not only is it Redis compatible, it’s written in C#, so it’s ideal for .NET-native environments. Check out the Garnet website for more information.

I’ve shown code to interact with Redis from C#/.NET in the past; the same code will work with Garnet.

Here’s a docker-compose file to create an instance of Garnet

services:
  garnet:
    image: 'ghcr.io/microsoft/garnet'
    ulimits:
      memlock: -1
    container_name: garnet
    ports:
      - "6379:6379"

Let’s create a chatbot/agent using Azure AI Foundry and Semantic Kernel with C#

Setting up a project and model in AI Foundry

Let’s start by creating a project in https://ai.azure.com/

Note: I’m going to create a very simple, pretty standard chatbot for a pizza delivery service, so my project is going to be called pizza. You’ll see this in the code, but of course replace it with your preferred example or real code, as this is the same setup that you’ll do for your own chatbot anyway.

  • Navigate to https://ai.azure.com/
  • Click Create new (if not available go to the Management Center | All Resources and the option should be there)
  • Select the Azure AI Foundry resource, then click Next
  • Supply a project name, resource group (or create one) and region – I left this as Sweden Central as I’m sure I read that it was a little more advanced than some regions, but do pick one which suits.
  • Click Create

Once completed you’ll be presented with the project page.

We’re not quite done as we need to deploy a model…

  • From the left hand nav. bar, locate My assets and click on Models + endpoints.
  • Click + Deploy model
  • Select Deploy base model from the drop down
  • From the Select a model popup, choose a model; I’ve selected gpt-4o-mini which is a good model for chat completion.
  • Click Confirm
  • Give it a Deployment name; I’m using the Deployment type Standard and leaving all the new fields that appear at their defaults
  • Click Deploy to assign the model to the project

We should now see some source code samples listed. We’ll partially be using these in the code part of this post, but before we move on we need an endpoint and an API key.

  • From this page, on the Details tab, copy the Endpoint Target URI – we don’t need the whole thing; from the project overview we can get the cut-down version, but it’s basically https://{your project}.cognitiveservices.azure.com/
  • From below the Target URI copy the Key

Writing the code

Create a Console application using Visual Studio.

Let’s begin by adding the following NuGet packages

dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.Extensions.Configuration
dotnet add package Microsoft.Extensions.Configuration.Json

We’re using (as you can see) Semantic Kernel. The versions seem to change pretty quickly at the moment, so hopefully the code below will work, but if not check against the version you’re using. For completeness here are my versions

<ItemGroup>
  <PackageReference Include="Microsoft.Extensions.Configuration" Version="9.0.10" />
  <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="9.0.10" />
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.66.0" />
</ItemGroup>

Create yourself an appsettings.json file, which should look like this

{
  "AI": {
    "Endpoint": "<The Endpoint>",
    "ApiKey": "<Api Key>",
    "ApiVersion": "2024-12-01-preview",
    "DeploymentName":  "pizza"
  }
}

Obviously you’ll need to supply your endpoint and API key that we copied after creating our AI Foundry project.

Now before we go on to look at implementing Program.cs… I want this LLM to use some custom functions to fulfil a couple of tasks, such as returning the menu and placing an order.

The AI Foundry project hosts an LLM which is our chatbot; it can answer questions and also generate hallucinations etc. For example, without my functions supplied it will try to create a pizza menu for me, but that’s not a lot of use to our pizza place.

What I want is the Natural Language Processing (NLP) as well as the model’s “knowledge” to work with my functions – we implement this using Plugins.

What I want to happen is this

  • The customer connects to the chatbot/LLM
  • The customer then asks either to order a pizza or for information on what pizzas we make, i.e. the menu
  • The LLM then needs to pass information to the PizzaPlugin which then returns information to the LLM to respond to the customer

Our PizzaPlugin is a standard C# class and we’re going to keep things simple, but you can imagine that this could call into a database or whatever you like to get a menu and place an order.

public class PizzaPlugin
{
    [KernelFunction]
    [Description("Use this function to list the pizzas a customer can order")]
    public string ListMenu() => "We offer Meaty Feasty, Pepperoni, Veggie, and Cheese pizzas.";

    [KernelFunction]
    public string PlaceOrder(string pizzaType)
        => $"Order placed for: {pizzaType}. It will be delivered in 30 minutes.";
}

The KernelFunctionAttribute is registered/added to the Semantic Kernel to supply callable plugin functions. The DescriptionAttribute is optional, but recommended if you want the LLM to understand what the function does during auto function calling (which we will be using). I’ve left the other function without the DescriptionAttribute just to demonstrate it’s not required in this case, yet our function will/should still be called. If we had many similar functions this would be a helpful addition.

Note: Also try to use function names that clearly state their usage, i.e. use action-oriented naming.

Now let’s implement Program.cs, where we’ll read in our configuration from appsettings.json, create the Semantic Kernel, add the Azure OpenAI chat services, add the plugin we just created, then call into the AI Foundry LLM model we created earlier.

We’re NOT going to create all the code for an actual console-based chat app, hence we’ll just predefine the “chat” part with a ChatHistory object. In a real-world app you may wish to keep track of the chat history.

using Microsoft.Extensions.Configuration;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using OpenAI.Chat;
using SemanticKernelTest.PizzaPlugin;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

var endpoint = config["AI:Endpoint"];
var apiKey = config["AI:ApiKey"];
var apiVersion = config["AI:ApiVersion"];
var deploymentName = config["AI:DeploymentName"];

var builder = Kernel.CreateBuilder();

builder.AddAzureOpenAIChatCompletion(
    deploymentName: deploymentName,
    endpoint: endpoint,
    apiKey: apiKey,
    apiVersion: apiVersion
);

var kernel = builder.Build();

var plugin = new PizzaPlugin();
kernel.Plugins.AddFromObject(plugin);

var chatCompletion = kernel.GetRequiredService<IChatCompletionService>();

var chatHistory = new ChatHistory();
chatHistory.AddAssistantMessage("How can I help you?");
chatHistory.AddUserMessage("Can I order a plain Pepperoni pizza?");

var result = await chatCompletion.GetChatMessageContentAsync(chatHistory, new PromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
}, kernel);

Console.WriteLine(result.Content);

Before you run this code, place breakpoints on the kernel functions in the plugin and then run the code. Hopefully all runs OK and you’ll notice that the LLM (via Semantic Kernel) calls into the plugin methods. As you’ll hopefully see, it calls the menu to check whether the pizza supplied is one we make, then orders it if it exists. Change the pizza to one we do not make (for example Chicken) and watch the process and output.

More settings

In the code above we’re using the PromptExecutionSettings, but we can also use OpenAIPromptExecutionSettings instead; from this we can configure OpenAI by setting the Temperature, MaxTokens and others, for example

var result = await chatCompletion.GetChatMessageContentAsync(chatHistory, new OpenAIPromptExecutionSettings
{
  Temperature = 0.7,
  FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
  MaxTokens = 100
}, kernel);

These options are also settable in AI Foundry. Temperature controls the randomness of the model: a lower value is more deterministic whereas a higher value produces more random results; the default is 1.0.

  • 0.2-0.5 is more deterministic and produces more focused outputs
  • 0.8-1.0 allows for more diverse and creative responses

Secure data in plain sight (using .NET)

Often we’ll come across situations where we need to put a password into our application, for example securing a connection string or login details for a service.

This post is more a dissection of the post Encrypting Passwords in a .NET app.config File than anything new, but interestingly it threw up a bunch of things to look at.

String & SecureString

Let’s start by looking at String storage. Ultimately strings are stored in memory in clear text and their disposal is determined by garbage collection, hence their lifespan is non-deterministic and we could dump the strings in clear text using WinDbg or similar tools.

SecureString is part of the System.Security namespace and supplies a mechanism for encrypting a string in memory (on Windows) which can also be deterministically disposed of when no longer required.

The following code shows how we might use a SecureString.

public static class Secure
{
   private static byte[] ENTROPY = System.Text.Encoding.Unicode.GetBytes("Salt Is Not A Password");

   public static string Encrypt(SecureString input)
   {
      return Encrypt(ToInsecureString(input));
   }

   public static string Encrypt(string input)
   {
      var encryptedData = System.Security.Cryptography.ProtectedData.Protect(
         System.Text.Encoding.Unicode.GetBytes(input),
         ENTROPY,
         System.Security.Cryptography.DataProtectionScope.CurrentUser);
      return Convert.ToBase64String(encryptedData);
   }

   public static SecureString DecryptToSecureString(string encryptedData)
   {
      var result = DecryptToString(encryptedData);
      return result != null ? ToSecureString(result) : new SecureString();
   }

   public static string DecryptToString(string encryptedData)
   {
      try
      {
         var decryptedData = System.Security.Cryptography.ProtectedData.Unprotect(
            Convert.FromBase64String(encryptedData),
            ENTROPY,
            System.Security.Cryptography.DataProtectionScope.CurrentUser);
         return System.Text.Encoding.Unicode.GetString(decryptedData);
      }
      catch
      {
         return null;
      }
   }

   public static SecureString ToSecureString(string input)
   {
      var secure = new SecureString();
      foreach (var c in input)
      {
         secure.AppendChar(c);
      }
      secure.MakeReadOnly();
      return secure;
   }

   public static string ToInsecureString(SecureString input)
   {
      string returnValue;
      var ptr = System.Runtime.InteropServices.Marshal.SecureStringToBSTR(input);
      try
      {
         returnValue = System.Runtime.InteropServices.Marshal.PtrToStringBSTR(ptr);
      }
      finally
      {
         System.Runtime.InteropServices.Marshal.ZeroFreeBSTR(ptr);
      }
      return returnValue;
   }
}

References

Encrypting Passwords in a .NET app.config File
SecureString Class

A simple web API in various languages and deployable to Kubernetes (C#)

Introduction

I’m always interested in how different programming languages and their libs/frameworks tackle the same problem. Recently the topic of writing web APIs in whatever language we wanted came up, and so I thought, well, let’s try to do just that.

The service is maybe too simple for a really good exploration of the frameworks and language features of the languages I’m going to use, but at the same time I wanted to do just the bare minimum needed to have something working.

The service is an “echo” service; it will have an endpoint that simply passes back what’s sent to it (prefixed with some text) and will also supply livez and readyz endpoints, as I want to also create a Dockerfile and the associated k8s YAML files to deploy the service.

The healthz endpoint is deprecated as of k8s v1.16, so we’ll leave that one out.

It should be noted that there are (in some cases) other frameworks and optimisations that could be used; my interest is solely to get a basic web API deployed to k8s that works, so you may have preferences for other ways to do this.

C# Minimal API

Let’s start with an ASP.NET Core minimal API web API…

  • Create an ASP.NET core Web API project in Visual Studio
  • Enable container support and I’ve chosen Linux OS
  • Ensure Container build type is set to Dockerfile
  • I’m using minimal API so ensure “Use Controllers” is not checked

Now let’s just replace Program.cs with the following

using Microsoft.AspNetCore.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHealthChecks();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapGet("/echo", (string text) =>
    {
        app.Logger.LogInformation($"C# Echo: {text}");
        return $"Echo: {text}";
    })
    .WithName("Echo")
    .WithOpenApi();

app.MapHealthChecks("/livez");
app.MapHealthChecks("/readyz", new HealthCheckOptions
{
    Predicate = _ => true
});

app.Run();

Docker

Next we need to copy the Dockerfile from the csproj folder to the sln folder – for completeness here’s the Dockerfile generated by Visual Studio (comments removed)

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER $APP_UID
WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["EchoService/EchoService.csproj", "EchoService/"]
RUN dotnet restore "./EchoService/EchoService.csproj"
COPY . .
WORKDIR "/src/EchoService"
RUN dotnet build "./EchoService.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./EchoService.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "EchoService.dll"]

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo-service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo-service:v1

and we can test using “http://localhost:8080/echo?text=Putridparrot”

Kubernetes

If all went well we’ve now tested our application and seen it working from a Docker image, so now we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use Helm at this point as I just want a (relatively) simple YAML file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration – deployment, service and ingress – in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo-service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.

Publishing an application as a single file

For a while now we’ve been able to turn our usual .exe and .dlls into a single file, which of course makes deployment very simple. Let’s see what we need to change (in the .csproj of your EXE)

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <PublishSingleFile>true</PublishSingleFile>
    <SelfContained>true</SelfContained>
    <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  </PropertyGroup>
</Project>

PublishSingleFile specifies whether we should publish to a single EXE. SelfContained, when true, means all the required .NET runtime files are included, so the application can run on any machine without requiring the .NET runtime to be installed. Finally, RuntimeIdentifier specifies the target platform and ensures the correct runtime files are included.

Note: We can specify the RuntimeIdentifier as part of the publish step if we prefer.

Options for this are

  • Windows
    • win-x86
    • win-x64
    • win-arm
    • win-arm64
  • Linux
    • linux-x64
    • linux-arm
    • linux-arm64
  • Mac OS
    • osx-x64
    • osx-arm64

Publishing

We would use the following command to publish our application

dotnet publish

More specifically we’d use commands such as

dotnet publish -r win-x64 -c Release
dotnet publish -r linux-x64 -c Release
dotnet publish -r osx-x64 -c Release

Where -r is the runtime (see the list above) and -c for the configuration.

Be aware that when you include the runtime you’ll see an increase in the size of your self-contained EXE, but now you just have the one file to release.

You run your server application and the port is not available

I’ve hit this problem before (see the post An attempt was made to access a socket in a way forbidden by its access permissions). The port was available one day and seemingly locked the next…

Try the following to see if the port is within an excluded port range

netsh interface ipv4 show excludedportrange protocol=tcp

If you do find the port is within one of the ranges then I’ve found (at least for the port I’ve been using) that I can stop and restart the winnat service, i.e.

Note: you may need to run these as administrator.

net stop winnat

then

net start winnat

and the excluded port list reduces in size.

Microsoft’s Dependency Injection

Dependency injection has been a fairly standard part of development for a while. You’ve probably used Unity, Autofac, Ninject and others in the past.

Frameworks, such as ASP.NET core and MAUI use the Microsoft Dependency Injection package (Microsoft.Extensions.DependencyInjection) and we can use this with any other type of application.

For example, if we create ourselves a console application, then add the package Microsoft.Extensions.DependencyInjection, we can then use the following code

var serviceCollection = new ServiceCollection();

// add our services

var serviceProvider = serviceCollection.BuildServiceProvider();

and it’s as simple as that.

Microsoft.Extensions.DependencyInjection has the features we require for most dependency injection scenarios (note: it does not support property injection, for example). We can add services as…

  • Transient – an instance is created for every request, for example
    serviceCollection.AddTransient<IPipeline, Pipeline>();
    // or
    serviceCollection.AddTransient<Pipeline>();
    
  • Singleton – a single instance is created and reused on every request, for example
    serviceCollection.AddSingleton<IPipeline, Pipeline>();
    // or
    serviceCollection.AddSingleton<Pipeline>();
    
  • Scoped – when we create a scope we get the same instance within that scope. In ASP.NET core a scope is created for each request
    serviceCollection.AddScoped<IPipeline, Pipeline>();
    // or
    serviceCollection.AddScoped<Pipeline>();
    

For services registered as “scoped”, if no scope is created then the code will work more or less like a singleton, i.e. the scope is the whole application. If we want to mimic ASP.NET (for example) we would create a scope per request, and we would do this by using the following

using var scope = serviceProvider.CreateScope();

var pipeline1 = scope.ServiceProvider.GetRequiredService<Pipeline>();
var pipeline2 = scope.ServiceProvider.GetRequiredService<Pipeline>();

In the above code the same instance of Pipeline is returned for each GetRequiredService call, but when the scope is disposed of, or another scope is created, a new instance for that scope will be returned.
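That scoped behaviour can be seen in a small self-contained sketch (Pipeline here is just an empty stand-in class):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddScoped<Pipeline>();

using var provider = services.BuildServiceProvider();

using (var scope = provider.CreateScope())
{
    var a = scope.ServiceProvider.GetRequiredService<Pipeline>();
    var b = scope.ServiceProvider.GetRequiredService<Pipeline>();
    Console.WriteLine(ReferenceEquals(a, b)); // True - reused within the scope
}

using (var scope = provider.CreateScope())
{
    // A new scope means a brand new instance
    var c = scope.ServiceProvider.GetRequiredService<Pipeline>();
}

public class Pipeline { }
```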

The service provider is used to create/return instances of our services. We can use GetRequiredService, which will throw an InvalidOperationException if the service is not registered, or we might use GetService, which will not throw an exception but will return either the instance or null.

Multiple services of the same type

If we register multiple implementations of our services like this

serviceCollection.AddTransient<IPipeline, Pipeline1>();
serviceCollection.AddTransient<IPipeline, Pipeline2>();

and we use the service provider’s GetRequiredService<IPipeline>, we will get a Pipeline2 – it will be the last registered type.

If we want to get all services registered for type IPipeline then we use GetServices<IPipeline> and we’ll get an IEnumerable of IPipeline, so if we have a service which takes an IPipeline, we’d need to declare it as follows

public class Context(IEnumerable<IPipeline> pipeline)
{
}

Finally we have the keyed option; this allows us to register multiple variations of an interface (for example) and give each a key/name, for example

serviceCollection.AddKeyedTransient<IPipeline, Pipeline1>("one");
serviceCollection.AddKeyedTransient<IPipeline, Pipeline2>("two");

Now these will not be returned when using GetServices<IPipeline>; instead it’s expected that we get the service by its key, i.e.

var pipeline = serviceProvider.GetKeyedService<IPipeline>("one");

When declaring the requirement in our dependent classes we would use the FromKeyedServicesAttribute like this

public class Context([FromKeyedServices("one")] IPipeline pipeline)
{
}

StringSyntaxAttribute and the useful hints on DateTime ToString

For a while I’ve used the DateTime ToString method and noticed the “hint” for showing the possible formats, but I’ve not really thought about how this happens, until now.

Note: This attribute came in for projects targeting .NET 7 or later.

The title of this post gives away the answer to how this all works, but let’s take a look anyway…

If you type

DateTime.Now.ToString("

Visual Studio kindly shows a list of different formatting such as Long Date, Short Date etc.

We can use this same technique in our own code (most likely libraries etc.) by simply adding the StringSyntax attribute to our method parameter(s).

For example

static void Write(
   [StringSyntax(StringSyntaxAttribute.DateOnlyFormat)] string input)
{
    Console.WriteLine(input);
}

This attribute does not enforce the format (in the example above), i.e. you can enter whatever you like as a string. It just gives you some help (or a hint) as to possible values. In the case of DateOnlyFormat these are possible date formats. StringSyntax actually supports other syntax hints such as DateTimeFormat, GuidFormat and more.
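As a quick sketch of one of those other hints (the method here is made up for illustration), annotating a parameter with the Regex syntax means Visual Studio offers regex colourisation and help at the call site, while the runtime behaviour is completely unchanged:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;
using System.Text.RegularExpressions;

// The attribute is purely an editor hint - the method runs identically
// whether or not the caller passes a "valid looking" regex.
static bool IsMatch(string input,
    [StringSyntax(StringSyntaxAttribute.Regex)] string pattern)
    => Regex.IsMatch(input, pattern);

Console.WriteLine(IsMatch("order-123", @"\d+")); // True
```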

Sadly (at least at the time of writing) I don’t see any options for custom formats.