Category Archives: ASP.NET Core

Different ways of working with the HttpClient

A few years back I wrote the post .NET HttpClient – the correct way to use it.

I wanted to extend this discussion to the other ways of using/instantiating your HttpClient.

We’ll look at this from the perspective of how we’d usually configure things for ASP.NET, but these approaches are not limited to ASP.NET.

IHttpClientFactory

Instead of passing an HttpClient to a class (such as a controller) we might prefer to use the IHttpClientFactory. This allows us to inject the factory and create an instance of an HttpClient using the method CreateClient, for example in our Program.cs

builder.Services.AddHttpClient();

then in our code which uses the IHttpClientFactory

public class ExternalService(IHttpClientFactory httpClientFactory)
{
   public async Task LoadData()
   {
      var httpClient = httpClientFactory.CreateClient();
      // use the httpClient as usual, e.g. await httpClient.GetAsync(...)
   }
}

This might not seem like much of an advancement over passing HttpClient instances around.

Where this is really useful is in allowing us to configure our HttpClient, such as setting the base address, timeouts etc. In this situation we can use “named” clients. We’d assign a name to the client such as

builder.Services.AddHttpClient("external", client => {
  client.BaseAddress = new Uri("https://some_url");
  client.Timeout = TimeSpan.FromMinutes(3);
});

Now in usage we’d write the following

var httpClient = httpClientFactory.CreateClient("external");

We can now configure multiple clients with different names for use in different scenarios. We can also add policy and message handlers, for example

builder.Services.AddHttpClient("external", client => {
  // configuration for the client
})
.AddHttpMessageHandler<SomeMessageHandler>()
.AddPolicyHandler(SomePolicy());
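SomeMessageHandler and SomePolicy above are placeholders. As a sketch, a message handler is just a DelegatingHandler subclass (the correlation id header here is purely illustrative)

public class SomeMessageHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // stamp every outgoing request made through this client
        request.Headers.Add("X-Correlation-Id", Guid.NewGuid().ToString());
        return base.SendAsync(request, cancellationToken);
    }
}

// the handler itself also needs registering with DI
builder.Services.AddTransient<SomeMessageHandler>();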

Strongly Typed Client

Typed or strongly typed clients are another way of using the HttpClient; oddly, this looks much more like our old way of passing HttpClients around.

We create a class specific to the HttpClient usage, and have an HttpClient parameter on the constructor, for example

public class ExternalHttpClient : IExternalHttpClient
{
  private readonly HttpClient _httpClient;

  public ExternalHttpClient(HttpClient httpClient)
  {
    _httpClient = httpClient;
    _httpClient.BaseAddress = new Uri("https://some_url");
    _httpClient.Timeout = TimeSpan.FromMinutes(3);
  }

  public Task<SomeData> GetDataAsync()
  {
     // use _httpClient as usual
  }
}

We’d now need to add the client to the dependency injection in Program.cs using

builder.Services.AddHttpClient<IExternalHttpClient, ExternalHttpClient>();

Conclusions

The first question might be: why use strongly typed HttpClients over the IHttpClientFactory? The most obvious response is that it gives a cleaner design, i.e. we don’t use “magic strings” and we know which client does what, as it exposes the methods for calling the endpoints to the developer. Essentially it encapsulates our HttpClient usage for a specific endpoint. It also gives us a cleaner way of testing our code by allowing us to mock the interface only (not having to mock an IHttpClientFactory etc.).
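As a sketch of that testing benefit, a hand-rolled fake of the interface is all we need (ReportService and the returned data are hypothetical)

// no HttpClient or IHttpClientFactory involved, we only implement
// the interface the code under test depends on
public class FakeExternalHttpClient : IExternalHttpClient
{
    public Task<SomeData> GetDataAsync() =>
        Task.FromResult(new SomeData());
}

// the class under test takes an IExternalHttpClient, so the fake
// slots straight in
var service = new ReportService(new FakeExternalHttpClient());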

However, the IHttpClientFactory way of working gives us a central place where we’d generally have all our clients declared and configured; named clients allow us to switch between clients easily by name, and it also gives great integration with things like Polly.
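For example, a retry policy via Polly might look something like this (assuming the Microsoft.Extensions.Http.Polly NuGet package is referenced)

// retry transient HTTP failures (5xx and 408 responses) up to three
// times, waiting 2, 4 then 8 seconds between attempts
builder.Services.AddHttpClient("external", client =>
{
    client.BaseAddress = new Uri("https://some_url");
})
.AddPolicyHandler(HttpPolicyExtensions
    .HandleTransientHttpError()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))));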

Calling Orleans from ASP.NET

In my last post Getting started with Orleans we covered a lot of ground on the basics of setting up and using Orleans. It’s quite likely you’ll be wanting to use ASP.NET as an entry point to your Orleans code, so let’s look at how we might set this up.

Create yourself an ASP.NET core project, I’m using controllers but minimal API is also fine (I just happened to have the option to use controllers selected).

After you’ve created your application, clear out the weather forecast code etc. if you created the default sample.

Add a folder for your grain(s) (mine’s named Grains, not very imaginative) and I’ve added the following files and code…

IDocumentGrain.cs

public interface IDocumentGrain : IGrainWithGuidKey
{
    Task<string> GetContent();
    Task UpdateContent(string content);
    Task<DocumentMetadata> GetMetadata();
    Task Delete();
}

DocumentGrain.cs

public class DocumentGrain([PersistentState("doc", "documentStore")] IPersistentState<DocumentState> state)
    : Grain, IDocumentGrain
{
    public Task<string> GetContent()
    {
        // State is hydrated automatically on activation
        return Task.FromResult(state.State.Content);
    }

    public async Task UpdateContent(string content)
    {
        state.State.Content = content;
        state.State.LastUpdated = DateTime.UtcNow;
        await state.WriteStateAsync(); // persist changes
    }

    public Task<DocumentMetadata> GetMetadata()
    {
        var metadata = new DocumentMetadata
        {
            Title = state.State.Title,
            LastUpdated = state.State.LastUpdated
        };
        return Task.FromResult(metadata);
    }

    public async Task Delete()
    {
        await state.ClearStateAsync(); // wipe persisted state
    }
}

DocumentMetadata.cs

[GenerateSerializer]
public class DocumentMetadata
{
    [Id(0)]
    public string Title { get; set; } = string.Empty;

    [Id(1)]
    public DateTime LastUpdated { get; set; }
}

DocumentState.cs

public class DocumentState
{
    public string Title { get; set; } = string.Empty;
    public string Content { get; set; } = string.Empty;
    public DateTime LastUpdated { get; set; }
}

Now we’ll add the DocumentController.cs in the Controllers folder

[ApiController]
[Route("api/[controller]")]
public class DocumentController(IClusterClient client) : ControllerBase
{
    [HttpGet("{id}")]
    public async Task<string> Get(Guid id)
    {
        var grain = client.GetGrain<IDocumentGrain>(id);
        return await grain.GetContent();
    }
}

Note: As we touched on in the previous post, we just use grains as if they already exist; the Orleans runtime will create and activate them if they do not exist, or return them if they already exist.

Finally in Program.cs add the following code after builder.Services.AddControllers();

builder.Host.UseOrleans(silo =>
{
    silo.UseLocalhostClustering();
    silo.AddMemoryGrainStorage("documentStore");
    silo.UseDashboard(options =>
    {
        options.HostSelf = true;
        options.Port = 7000;
    });
});

When we run this application we need to pass a GUID (as we’re using IGrainWithGuidKey), for example https://localhost:7288/api/document/B5D4A805-80C3-4239-967B-937A5A0E9250. This request is routed to the DocumentController Get endpoint, which gets a grain based upon the supplied id and calls the grain method GetContent, returning the current state’s Content property.

I’ve not added code to call the other methods on the grain, but the code above shows how the grain side of these might look.
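As a sketch, the remaining endpoints might look something like this inside DocumentController (the routes are illustrative)

// update the document's content
[HttpPut("{id}")]
public Task Update(Guid id, [FromBody] string content) =>
    client.GetGrain<IDocumentGrain>(id).UpdateContent(content);

// fetch the metadata (title, last updated)
[HttpGet("{id}/metadata")]
public Task<DocumentMetadata> GetMetadata(Guid id) =>
    client.GetGrain<IDocumentGrain>(id).GetMetadata();

// clear the persisted state
[HttpDelete("{id}")]
public Task Delete(Guid id) =>
    client.GetGrain<IDocumentGrain>(id).Delete();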

Increasing the body size of requests (with your ASP.NET core application within Kubernetes)

I came across an interesting issue whereby we wanted to allow larger files to be uploaded through our ASP.NET core API, through to an Azure Function, all of this hosted within Kubernetes.

The first thing to note is that if you’re running through something like Cloudflare, Akamai or Traffic Manager, changes there are outside the scope of this post.

Kubernetes Ingress

Let’s first look at Kubernetes; the ingress controller for your application may have something like this

className: nginx
annotations:
  nginx.ingress.kubernetes.io/proxy-buffer-size: "100m"
  nginx.ingress.kubernetes.io/proxy-body-size: "100m"
...

In the above we set the buffer and body size to 100MB. One thing to note is that when we had this closer to the actual file size we wanted to support, the request body turned out to be a little larger than the file itself, so you might need to set these slightly higher than the maximum file size you intend to allow.

Kestrel

The change to the Kubernetes ingress now allows requests of up to 100MB, but you may now find the request rejected by ASP.NET core, or more specifically Kestrel.

Kestrel (at the time of writing) has a default MaxRequestBodySize of 30MB, so we need to add the following

builder.WebHost.ConfigureKestrel(serverOptions =>
{
  serverOptions.Limits.MaxRequestBodySize = 104857600; // 100 MB in bytes
});
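If you’d rather not raise the limit globally, the Kestrel limit can also be overridden per endpoint, for example with the RequestSizeLimit attribute on a controller action (the upload action here is hypothetical)

[HttpPost("upload")]
[RequestSizeLimit(104857600)] // 100 MB, just for this action
public IActionResult Upload(IFormFile file)
{
    // forward/process the uploaded file as required
    return Ok();
}

Note that multipart form uploads (IFormFile) also have their own separate limit, FormOptions.MultipartBodyLengthLimit (roughly 128 MB by default), which can be raised via the RequestFormLimits attribute if you go beyond it.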

Azure Functions

Next up, we’re using Azure Functions, where the default maximum request body size (when on the pro consumption plan) is 100MB; however, if you need or want to change/fix this in place, you can edit the host.json file to include this

"http": {
  "maxRequestBodySize": 100
}

Obviously if you have code in place anywhere that also acts as a limit, you’ll need to amend that as well.

Anything else?

Depending on the size of files and the time they take to process, you might also need to review your timeouts on HttpClient or whatever mechanism you’re using.
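For example, HttpClient defaults to a 100 second timeout, so a named client used for forwarding large uploads might be configured with something more generous (the client name here is illustrative)

builder.Services.AddHttpClient("uploads", client =>
{
    // large uploads can easily exceed the 100 second default
    client.Timeout = TimeSpan.FromMinutes(5);
});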

Output caching in ASP.NET core

Output caching can be used on your endpoints so that if the same request comes into your endpoint within a “cache expiry” time, the endpoint will not get called and the stored/cached response from a previous call will be returned.

To make that clearer: the endpoint’s method will NOT be called; the response is cached and hence returned via the ASP.NET middleware.

This is particularly useful for static or semi-static content.

Out of the box, the OutputCache can handle caching for different query parameters and routes can be easily set up to handle caching.

I’m going to set up the output cache to use Redis

builder.AddRedisOutputCache("cache");

builder.Services.AddOutputCache(options =>
{
    options.AddPolicy("ShortCache", builder => builder.Expire(TimeSpan.FromSeconds(10)));
});
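The named policy can then be applied to an endpoint by name (this endpoint is just for illustration)

// responses from this route are cached for the 10 seconds
// configured in the "ShortCache" policy
app.MapGet("/short-cached", () => new
{
    Message = "Short cache",
    Timestamp = DateTime.UtcNow
})
.CacheOutput("ShortCache");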

Now from a minimal API endpoint we can apply output caching using CacheOutput as below

app.MapGet("/cached/{id}", (int id) => new
    {
        Message = $"Output Cache {id}",
        Timestamp = DateTime.UtcNow
    })
    .CacheOutput(c => c.VaryByValue(
        ctx => new KeyValuePair<string, string>("id", ctx.Request.RouteValues["id"]?.ToString() ?? string.Empty)));

– Each unique id gets its own cached response.
– The endpoint is only executed once per id within the cache duration.
– The VaryByValue method lets you define custom cache keys based on route values, headers, or query strings.
– Without this, /cached/1 and /cached/2 might share a cache entry or overwrite each other depending on the default key behavior.

We can also rely on the default caching policy, for example

app.MapGet("/cached-query", (int id) => new
    {
        Message = $"Output Cache {id}",
        Timestamp = DateTime.UtcNow
    })
    .CacheOutput();

Finally, don’t forget to add the output caching middleware to the pipeline

app.UseOutputCache();

A simple web API in various languages and deployable to Kubernetes (C#)

Introduction

I’m always interested in how different programming languages and their libs/frameworks tackle the same problem. Recently the topic of writing web APIs in whatever language we wanted came up, and so I thought, well, let’s try to do just that.

The service is maybe too simple to really show off the frameworks and language features of the languages I’m going to use, but at the same time I wanted to do just the bare minimum to have something working.

The service is an “echo” service; it will have an endpoint that simply passes back what’s sent to it (prefixed with some text) and will also supply livez and readyz endpoints, as I want to create a Dockerfile and the associated k8s yaml files to deploy the service.

The healthz endpoint is deprecated as of k8s v1.16, so we’ll leave that one out.

It should be noted that there are (in some cases) other frameworks that can be used and optimisations, my interest is solely to get some basic Web API deployed to k8s that works, so you may have preferences for other ways to do this.

C# Minimal API

Let’s start with an ASP.NET core, minimal API, web API…

  • Create an ASP.NET core Web API project in Visual Studio
  • Enable container support and I’ve chosen Linux OS
  • Ensure Container build type is set to Dockerfile
  • I’m using minimal API so ensure “Use Controllers” is not checked

Now let’s just replace Program.cs with the following

using Microsoft.AspNetCore.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHealthChecks();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapGet("/echo", (string text) =>
    {
        app.Logger.LogInformation($"C# Echo: {text}");
        return $"Echo: {text}";
    })
    .WithName("Echo")
    .WithOpenApi();

app.MapHealthChecks("/livez");
app.MapHealthChecks("/readyz", new HealthCheckOptions
{
    Predicate = _ => true
});

app.Run();

Docker

Next we need to copy the Dockerfile from the csproj folder to the sln folder – for completeness here’s the Dockerfile generated by Visual Studio (comments removed)

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER $APP_UID
WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["EchoService/EchoService.csproj", "EchoService/"]
RUN dotnet restore "./EchoService/EchoService.csproj"
COPY . .
WORKDIR "/src/EchoService"
RUN dotnet build "./EchoService.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./EchoService.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "EchoService.dll"]

Note: In Linux port 80 might be locked down, hence we use port 8080 by default.

To build this, run

docker build -t putridparrot.echo-service:v1 .

Don’t forget to change the name to your preferred name.

and to test this, run

docker run -p 8080:8080 putridparrot.echo-service:v1

and we can test using “http://localhost:8080/echo?text=Putridparrot”

Kubernetes

If all went well we’ve now tested our application and seen it working from a docker image, so next we need to create the deployment etc. for Kubernetes. Let’s assume you’ve pushed your image to Docker Hub or another container registry such as Azure – I’m calling my container registry putridparrotreg.

I’m also not going to use helm at this point as I just want a (relatively) simple yaml file to run from kubectl, so create a deployment.yaml file; we’ll store all the configuration (deployment, service and ingress) in this one file just for simplicity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: dev
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: putridparrotreg/putridparrot.echo-service:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "200Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: dev
  labels:
    app: echo
spec:
  type: ClusterIP
  selector:
    app: echo 
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 80

Don’t forget to change the “host” and image to suit; also this assumes you created a namespace “dev” for your app. See Creating a local container registry for information on setting up your own container registry.
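Assuming kubectl is pointed at your cluster, deploying is then something like

# create the namespace if it doesn't already exist
kubectl create namespace dev

# apply the deployment, service and ingress
kubectl apply -f deployment.yaml

# check the pods and probes are happy
kubectl get pods -n dev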

Logging and Application Insights with ASP.NET core

Obviously when you’re running an ASP.NET core application in Azure, we’re going to want the ability to capture logs to Azure. This usually means logging to Application Insights.

Adding Logging

Let’s start out by just looking at what we need to do to enable logging from ASP.NET core.

Logging is included by default in the way of the ILogger interface (ILogger<T>), hence we can inject into our code like this (this example uses minimal API)

app.MapGet("/test", (ILogger<Program> logger) =>
{
    logger.LogCritical("Critical Log");
    logger.LogDebug("Debug Log");
    logger.LogError("Error Log");
    logger.LogInformation("Information Log");
    logger.LogTrace("Trace Log");
    logger.LogWarning("Warning Log");
})
.WithName("Test")
.WithOpenApi();

To enable/filter logging we have something like the following within the appsettings.json file

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
}

The LogLevel, Default section sets the minimum logging level for all categories. So for example a Default of Information means only logging of Information level and above (i.e. Warning, Error and Critical) are captured.

Microsoft.AspNetCore is a category-specific setting: it applies the supplied log level to everything in the Microsoft.AspNetCore namespace. Because we can configure by namespace, we can also use categories such as Microsoft, System or Microsoft.Hosting.Lifetime. The same works for our own code, e.g. MyApp.Controllers, so this allows us to really start to tailor what different sections of our application capture in the logs.
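For example (MyApp.Controllers is a hypothetical category standing in for a namespace in your own code)

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning",
      "System": "Warning",
      "MyApp.Controllers": "Debug"
    }
  }
}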

Logging Levels

The various logging levels are as follows

  • LogLevel.Trace: The most detailed level, used for debugging and tracing (useful for entering/exiting methods and logging variables).
  • LogLevel.Debug: Detailed, but less so than Trace (useful for debugging and workflow logging).
  • LogLevel.Information: Informational messages at a higher level than the previous two (useful for logging the steps of processing code).
  • LogLevel.Warning: Indicates potential problems that do not warrant error level logging.
  • LogLevel.Error: Used for logging errors, exceptions and other failures.
  • LogLevel.Critical: Critical issues that may cause an application to fail, such as those that might crash your application; could also include things like missing connection strings etc.
  • LogLevel.None: Essentially disables logging.

Application Insights

Once you’ve created an Azure resource group and/or Application Insights service, you’ll be able to copy the connection string to connect to Application Insights from your application.

Before we can use Application Insights in our application we’ll need to

  • Add the nuget package Microsoft.ApplicationInsights.AspNetCore to our project
  • Add the ApplicationInsights section to the appsettings.json file, something like this
    "ApplicationInsights": {
      "InstrumentationKey": "InstrumentationKey=xxxxxx",
      "LogLevel": {
        "Default": "Information",
        "Microsoft": "Warning"
      }
    },
    

    We can obviously set the InstrumentationKey in code if preferred, but this LogLevel is specific to what is captured within Application Insights

  • Add the following to the Program.cs file below CreateBuilder

    var configuration = builder.Configuration;
    
    builder.Services.AddApplicationInsightsTelemetry(options =>
    {
        options.ConnectionString = configuration["ApplicationInsights:InstrumentationKey"];
    });
    

Logging Providers in code

We can also add logging via code, so for example after the CreateBuilder line in Program.cs we might have

builder.Logging.ClearProviders();
builder.Logging.AddConsole();
builder.Logging.AddDebug(); 

In the above we start by clearing all currently configured logging providers, then we add providers for logging to the console and debug output. The appsettings.json log levels still determine which logs we wish to capture.

Using secrets in your appsettings.json via Visual Studio 2022 and dotnet CLI

You’ve got yourself an appsettings.json file for your ASP.NET core application and you’re using sensitive data, such as passwords or other secrets. Now you obviously don’t want to commit those secrets to source control, so you’re not going to want to store these values in your appsettings.json file.

There are several ways to achieve this; one of those is to use the Visual Studio 2022 “Manage User Secrets” option, which is on the context menu of your project file. There’s also the ability to use the dotnet CLI for this, as we’ll see later.

This context menu option will create a secrets.json in %APPDATA%\Microsoft\UserSecrets\{Guid}. The GUID is stored within your .csproj in a PropertyGroup like this

<UserSecretsId>0e6abf63-deda-47fc-9a80-1cb56abaeead</UserSecretsId>

So the secrets file can be used like this

{
  "ConnectionStrings:DefaultConnection": "my-secret"
}

and this will map to your appsettings.json, that might look like this

{
  "ConnectionStrings": {
    "DefaultConnection": "not set"
  },
}

Now we can access the configuration in the usual way, for example

builder.Configuration.AddUserSecrets<Program>();

var app = builder.Build();
var connectionString = app.Configuration.GetSection("ConnectionStrings:DefaultConnection");
var defaultConnection = connectionString.Value;

When somebody else clones your repository you’ll need to recreate the secrets file. We could use dotnet user-secrets for this (if the project doesn’t yet have a UserSecretsId, run dotnet user-secrets init first), for example

dotnet user-secrets set "ConnectionStrings:DefaultConnection" "YourConnectionString"

and you can list the secrets using

dotnet user-secrets list

Disable the Kestrel server header

We generally don’t want to expose information about the server we’re running our ASP.NET core application on.

In the case of Kestrel we can disable the server header using

var builder = WebApplication.CreateBuilder(args); 

builder.WebHost.UseKestrel(options => 
   options.AddServerHeader = false);

Azure web app with IIS running ASP.NET core/Kestrel

When you deploy your ASP.NET core (.NET 8) to an Azure web app, you’ll have likely created the app to work with Kestrel (so you can deploy to pretty much any environment). But when you deploy as an Azure Web App, you’re essentially deploying to an IIS application.

So we need IIS to hand requests over to our app. We achieve this by adding a Web.config to the root of our published app, with configuration such as below. Note that hostingModel="inprocess" runs the app inside the IIS worker process; change it to "outofprocess" if you want IIS to act as a reverse proxy to Kestrel.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <handlers>
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <aspNetCore processPath=".\MyAspNetCoreApp.exe" stdoutLogEnabled="false" stdoutLogFile="\\?\%home%\LogFiles\stdout" hostingModel="inprocess" />
    </system.webServer>
  </location>
</configuration>

Is my ASP.NET core application running in Kubernetes (or Docker) ?

I came across a situation where I needed to change some logic in my ASP.NET core application dependent upon whether it was running inside Kubernetes or as an Azure Web app. Luckily there are environment variables we can use (and whilst we’re at it, there’s one for Docker as well)

var isHostedInKubernetes = 
   Environment.GetEnvironmentVariable("KUBERNETES_SERVICE_HOST") != null;
var isHostedInDocker = 
   Environment.GetEnvironmentVariable("DOTNET_RUNNING_IN_CONTAINER") == "true";