Multilingual support for an ASP.NET web API application

We sometimes wish to make our web API return error messages or other string data in different languages. The process is similar to MAUI and WinForms; we just do the following

  • Create a folder named Resources in our web API project
  • Add a RESX file, which we’ll name AppResources.resx. This will be the default language, so in my case this will include en-GB strings
  • Ensure the file has a Build Action of Embedded resource and Custom Tool of ResXFileCodeGenerator
  • Add a name (which is the key to your resource string) and then add the value. This is the string (i.e. the translated string) for the given key
  • Let’s add another RESX file, but this time name it AppResources.{language identifier}.resx, for example AppResources.de-DE.resx, which will contain the German translations of the keys/names
  • Again ensure the Build Action and Custom Tool are correctly set

The ResXFileCodeGenerator will generate properties in the AppResources class for us to access the resource strings. For example

AppResources.ExceptionMessage

If we need to test our translations without changing our OS language, we simply use code such as the following in the Program.cs of the web API

AppResources.Culture = new CultureInfo("de-DE");
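To show how this might be used, here’s a minimal sketch of a controller returning a localized message. It assumes a hypothetical ValuesController and the ExceptionMessage resource key from the generated AppResources class mentioned above.

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("[controller]")]
public class ValuesController : ControllerBase
{
    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        if (id < 0)
        {
            // the string returned depends on AppResources.Culture
            // (or the current UI culture if Culture has not been set)
            return BadRequest(AppResources.ExceptionMessage);
        }

        return Ok(id);
    }
}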

Azure Functions

Azure Functions (like AWS Lambdas and GCP Cloud Functions) allow us to write serverless code literally just as functions, i.e. no need to fire up a web application or VM. Of course, just like Azure containers, there is a server component, but we, the developers, need not concern ourselves with handling configuration etc.

Azure Functions will be spun up as and when required, meaning we’re only charged when they’re used. The downside of this is that they have to spin up from a “cold” state; in other words, the first person to hit your function will likely incur a performance hit whilst the function is started and then invoked.

The other thing to remember is that Azure Functions are stateless. You might store state in a DB such as Cosmos DB, but essentially a function is invoked, does something, then after a timeout period it’s shut back down.

Let’s create an example function and see how things work…

  • Create a new Azure Functions project
  • When you get to the options for the Function, select Http trigger and select the Anonymous Authorization level
  • Complete the wizard by clicking the Create button

The Anonymous authorization level allows the function to be triggered without providing a key. The HTTP trigger, as it sounds, means the function is triggered by an HTTP request.

The following is basically the code that’s created from the Azure Function template

public static class ExampleFunction
{
  [FunctionName("Example")]
  public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
  {
    log.LogInformation("HTTP trigger function processed a request.");

    string name = req.Query["name"];

    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    var responseMessage = string.IsNullOrEmpty(name)
      ? "Pass a name in the query string or in the request body for a personalized response."
      : $"Hello, {name}. This HTTP triggered function executed successfully.";

    return new OkObjectResult(responseMessage);
  }
}

We can actually run and debug this via Visual Studio in the normal way. We’ll get a URL supplied, something like http://localhost:7071/api/Example, to access our function.

As you can see from the above code, we get passed an ILogger and an HttpRequest. From the request we can get query parameters, so the URL above would be used like this http://localhost:7071/api/Example?name=PutridParrot
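Just as a sketch, calling the locally running function from a C# console application might look something like the following (the URL and query parameter are the ones shown above).

using var client = new HttpClient();

// call the locally hosted function, passing the name query parameter
var response = await client.GetAsync(
    "http://localhost:7071/api/Example?name=PutridParrot");

Console.WriteLine(await response.Content.ReadAsStringAsync());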

Of course the whole purpose of an Azure Function is for it to run on Azure. To publish it…

  • From Visual Studio, right mouse click on the project and select Publish
  • For the target, select Azure. Click Next
  • Select Azure Function App (Windows) or Linux if you prefer. Click Next again
  • Either select a Function instance if one already exists or create a new instance from this wizard page

If you’re creating a new instance, select the resource group etc. as usual and then click Create when ready.

Note: I chose the Consumption plan, which is the default when creating an Azure Functions instance. This is basically a “pay only for executions of your functions app” plan, so it should be the cheapest.

The next step is to Finish the publish process. If all went well you’ll see everything configured and you can close the Publish dialog.

From the Azure dashboard you can simply type Function App into the search textbox and you should see the published function with a status of Running. If you click on the function name it will show you the current status of the function as well as its URL, which we can access as we did with localhost, i.e.

https://myfunctionssomewhere.azurewebsites.net/api/Example?name=PutridParrot

Blazor and the GetFromJsonAsync exception TypeError: Failed to Fetch

I have an Azure hosted web API. I also have a simple Blazor standalone application that’s meant to call the API to get a list of categories to display, i.e. the Blazor app is meant to call the Azure web API, fetch the data and display it – should be easy enough, right?

The web API can easily be accessed via a web browser or a console app using the .NET HttpClient, but the Blazor code using the following simply kept throwing an exception with the cryptic message “TypeError: Failed to Fetch”

@inject HttpClient Http

// Blazor and other code

protected override async Task OnInitializedAsync()
{
   try
   {
      _categories = await Http.GetFromJsonAsync<string[]>("categories");
   }
   catch (Exception e)
   {
      Debug.WriteLine(e);
   }
}

What was happening was that I was actually getting a CORS error, which sadly isn’t really reported via the exception, so not exactly obvious.

If you get this error interacting with your web API via Blazor then go to the Azure dashboard. I’m running my web API as a container app, so type CORS into the left search bar of the resource (in my case a Container App); you should see the CORS subsection under the Settings section.

Add * to the Allowed Origins and click apply.

Now your Blazor app should be able to interact with the Azure web api app.
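Alternatively (or additionally), if you control the web API’s code you can enable CORS in the API itself. Here’s a minimal sketch for an ASP.NET Core minimal API Program.cs – the policy name and the categories endpoint are purely illustrative, and allowing any origin is only sensible for testing.

var builder = WebApplication.CreateBuilder(args);

// allow any origin purely for testing; lock this down for production
builder.Services.AddCors(options =>
    options.AddPolicy("AllowAll", policy =>
        policy.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod()));

var app = builder.Build();

app.UseCors("AllowAll");

app.MapGet("/categories", () => new[] { "Movies", "Music", "Books" });

app.Run();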

Azure Container apps

Azure offers a Kubernetes solution, which we looked at in the post Running and deploying to Azure Kubernetes, and also a solution called simply Azure Container Apps.

In fact Container Apps are built upon Kubernetes; just think of them as a simplification layer on top of k8s.

The main difference between the Kubernetes offering and Container Apps is exactly that – simplicity. You don’t have to manage the infrastructure yourself with Container Apps; they’re essentially a “serverless” solution. Container Apps also have the ability to scale, not just on CPU or memory usage but also on HTTP requests and events, such as those from the Azure Service Bus. So, for example, if there are no items in the Service Bus queue then Container Apps can scale down.

Let’s create our Container App…

  • Either search for Container App in the Azure dashboard or Create a resource then from the Containers category select Container App
  • As usual select or create a resource group
  • Give your container app a name, mine’s test-container
  • Set the region etc.
  • Select the Container tab and uncheck Use quickstart image, as we’re using the Azure registry we pushed our images to in the previous post Running and deploying to Azure Kubernetes
  • Set the Registry to your Azure registry OR the Docker registry. If you get Cannot access ACR XXX because admin credentials on the ACR are disabled. then go to your Azure registry and select Access Keys, where you can enable Admin user – if you have to do this step you’ll probably have to start the creation process over again.
  • Now select an image from your registry and the image tag
  • To expose our service we’ll now select the Ingress tab and tick Enabled. Leave Limited to Container Apps Environment checked OR, if you want to expose your app to the world, ensure Accepting traffic from anywhere is checked. Now set Target port to whatever you want; I’m going with the standard port 80

Now click Review + create then when you’re happy with the review, click the Create button.

When completed, an Application Url should be created. We set the Ingress as Limited to Container Apps Environment, so this will not be available to the outside world.

If you have more services to add, then add another container app: search the Dashboard for Container Apps Environments, select the environment that was created by Azure, then select the Apps | Apps option from the left hand navigation bar. From here we can go through the same process as above and add further apps.

Once created, select the Application Url for the container and this should now be accessible internally or via the web depending on what ingress traffic option you chose.

Working with Kafka host via Docker from a C# client

Let’s take a look at running a test instance (i.e. a single instance) of Kafka and writing a producer and consumer application in C# to interact with it. As is my preference, we’ll use Docker to run up our instance of Kafka; in my case this is running on an Ubuntu server.

Kafka running in Docker

We’ll start with the simplest docker compose file we can. So create the file docker-compose.yml and paste the following into it

version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://192.168.0.1:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

This is very simple: it’s running a single instance of Kafka (which is only really likely to be something we’d use for testing). Kafka uses Zookeeper (although I believe that dependency may have gone, or is at least potentially going away), so we have Zookeeper running as well.

In the above file we’re setting the PLAINTEXT_HOST to the machine running the instance of Kafka. Obviously this is not ideal, so we can change it to allow the host to be supplied either by an environment variable or via a .env file. For this example let’s change that line to

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://${HOST}:29092

Just add a .env file in the same location as the docker compose file, with something like this in it

HOST=192.168.0.1

Now we can run Kafka and Zookeeper up using

docker-compose up -d

Remove the -d if you want to watch the log, which I would recommend, to at least feel like things are running as expected. You can also run docker-compose ps at any time to check that the services are running successfully.

C# Producer

We’ll create a console application that will simply send some messages to a topic; it’s our producer. Here’s my Producer.csproj

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Confluent.Kafka" Version="2.3.0" />
  </ItemGroup>

</Project>

Whilst we could read configuration for Kafka from an INI file or the like, for simplicity we’ll handle it in code. So here’s a very basic sample of a producer (this is heavily based on the Confluent Kafka example)

using Confluent.Kafka;

var config = new List<KeyValuePair<string, string>>
{
    new("bootstrap.servers", "192.168.0.1:19092"),
    new("client.id", "my-producer")
};

const string topic = "my-topic";

string[] tickers = { "AAPL", "GOOGL", "MSFT", "AMZN", "META", "TSLA", "GS" };
string[] trades = { "Buy 100", "Sell 1000", "Buy 9090", "Sell 45", "Buy 900000", "Sell 123", "Buy 8901" };

using var producer = new ProducerBuilder<string, string>(config).Build();

var rnd = new Random();

for (var i = 0; i < 10; ++i)
{
    var ticker = tickers[rnd.Next(tickers.Length)];
    var trade = trades[rnd.Next(trades.Length)];

    producer.Produce(topic, new Message<string, string> { Key = ticker, Value = trade },
        deliveryReport =>
        {
            if (deliveryReport.Error.Code != ErrorCode.NoError)
            {
                Console.WriteLine($"Error sending event: {deliveryReport.Error.Reason}");
            }
            else
            {
                Console.WriteLine($"Sent event topic = {topic}: key = {ticker} value = {trade}");
            }
        });
}

producer.Flush(TimeSpan.FromSeconds(10));

In the above we’re creating a configuration with a reference to our bootstrap server and a unique client.id (note that the port should match the PLAINTEXT_HOST port your broker advertises, e.g. 29092 in the single broker compose file above or 19092 for broker1 in the multi-broker setup shown later). We also need a topic, which should be unique and will need to be known by the consumers that want to fetch events for the given topic.

In this example we create a batch of simple string key, string value events and then build the producer object. Then we just randomly pick a ticker, assign a trade against it and send that event to Kafka.
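As an aside, Confluent.Kafka also supplies strongly typed configuration classes, so (as far as I’m aware) the key/value list could equally be written as something like the following.

using Confluent.Kafka;

// strongly typed equivalent of the key/value configuration above
var config = new ProducerConfig
{
    BootstrapServers = "192.168.0.1:19092",
    ClientId = "my-producer"
};

using var producer = new ProducerBuilder<string, string>(config).Build();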

C# Consumer

Obviously we’re going to want to fetch these events at some point; we do this via a consumer. Once events are added to Kafka (and depending upon its setup/configuration) these events will be “played” to a consumer that attaches to the correct topic. Once the events are received by the consumer they will not be replayed again, unless we explicitly force Kafka to do so.

Again this example is based heavily on the Confluent Kafka C# consumer example. Create a console application and replace the contents of the .csproj with the same csproj listed earlier for the Producer – this just adds the relevant client package. Here’s the code for our console based consumer

using Confluent.Kafka;

var config = new List<KeyValuePair<string, string>>
{
    new("bootstrap.servers", "192.168.0.1:19092"),
    new("group.id", "my-group"),
    new("auto.offset.reset", "earliest")
};

const string topic = "my-topic";

var cts = new CancellationTokenSource();
Console.CancelKeyPress += (_, e) =>
{
    e.Cancel = true; // prevent the process from terminating.
    cts.Cancel();
};

using var consumer = new ConsumerBuilder<string, string>(config).Build();

consumer.Subscribe(topic);
try
{
    while (true)
    {
        var cr = consumer.Consume(cts.Token);
        Console.WriteLine($"Consumed event, topic {topic}: key = {cr.Message.Key} value = {cr.Message.Value}");
    }
}
catch (OperationCanceledException)
{
    // Ctrl-C was pressed.
}
finally
{
    consumer.Close();
}

There’s a little more here than strictly required, just to keep the consumer running and watching for events. In a service we of course wouldn’t need half of this code.

Essentially we create a configuration which tells Kafka that the consumer has a group.id (this is mandatory) and where we want the offset to reset to for playing the events from. In other words, this example will connect to Kafka and only consume events it hasn’t already consumed; it will not replay events from first to last.

If we wish to get ALL events (I’ve found this useful in some debugging situations, but it may also be required in a real world application), we change the ConsumerBuilder line to the following

using var consumer = new ConsumerBuilder<string, string>(config)
    .SetPartitionsAssignedHandler((c, partitions) =>
    {
        // reset the offsets for this client
        var offsets = partitions.Select(tp => new TopicPartitionOffset(tp, Offset.Beginning));
        return offsets;
    })
    .Build();

Multiple brokers

A single Kafka broker is fine for testing, but Kafka was designed for multiple brokers. Here’s a docker compose file that takes our single instance and adds two more to create three Kafka brokers (I think this is often viewed as the minimum for production, but don’t quote me on that)

version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka-broker1:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-broker1
    depends_on:
      - zookeeper
    ports:
      - 19092:19092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-broker1:9092,PLAINTEXT_HOST://${HOST}:19092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
  kafka-broker2:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-broker2
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-broker2:9092,PLAINTEXT_HOST://${HOST}:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
  kafka-broker3:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-broker3
    depends_on:
      - zookeeper
    ports:
      - 39092:39092
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-broker3:9092,PLAINTEXT_HOST://${HOST}:39092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3

For each broker we’ve set a unique KAFKA_BROKER_ID and a KAFKA_ADVERTISED_LISTENERS entry which references the newly added hostname. Just run this up with docker-compose and the previous client code should work happily against this setup.

Code etc.

Code and docker compose files are available as part of my github blog-projects repo.

Running Kibana from Docker

In my previous post I showed how to get Elasticsearch up and running; we also created a docker network like this

docker network create kibana-network

As with Elasticsearch in Docker, we need to include the actual tag, so run the following

docker run -d --name kibana --net kibana-network -p 5601:5601 kibana:8.10.4

Now navigate to your Kibana service using

http://192.168.0.88:5601/

If all goes well, Kibana will automatically locate Elasticsearch (if it’s on the same server). I’ve not tried hosting it on a different server, but it looks like we’d need to get a “key” from Elasticsearch and supply that to Kibana to connect to it. For now, let’s just stick with both being hosted on a single server.

We can check the features of Kibana using

curl -X GET 'localhost:5601/api/features'

If you set up the index and added some data into Elasticsearch as per the previous post, you should now be able to select the index via the Discover option in the web UI and view the data we entered via our Kibana instance.

Running Elasticsearch in a Docker container

If you want to run Elasticsearch in Docker you need to use a specific tag; the latest (at the time of writing) is 8.10.4, so let’s start by pulling that tag using

docker pull elasticsearch:8.10.4

I’m going to connect Kibana to this instance later, so let’s create a network for the two to work with. I’m calling mine kibana-network.

docker network create kibana-network

Once completed, run the following to start up Elasticsearch.

Note: xpack.security.enabled=false turns off https for testing locally

docker run -d \
   --name elasticsearch \
   --net kibana-network \
   -p 9200:9200 \
   -p 9300:9300 \
   -e "discovery.type=single-node" \
   -e "xpack.security.enabled=false" \
   elasticsearch:8.10.4

We want to check this is working, so let’s use CURL to call the Elasticsearch instance, i.e.

curl -X GET http://localhost:9200/_cat/nodes?v

Or from your browser

http://localhost:9200/_cat/health

If all worked, we should see something like

1699220835 21:47:15 docker-cluster yellow 1 1 28 28 0 0 1 0 - 96.6%

Before we move on let’s add some data into our instance. We’ll start by adding an index (again we’ll use CURL),

curl -X PUT http://localhost:9200/myservice

We should see a response which looks something like this

response: {"acknowledged":true,"shards_acknowledged":true,"index":"myservice"}

Now to add some initial data

curl -X POST -H 'Content-Type: application/json' -d '{ "name": "Debug", "description": "This is a debug message", "code": 1, "id": 2}' http://localhost:9200/myservice/_doc

If you add a few more entries then we can try a query via CURL to locate this one

curl -X GET "localhost:9200/myservice/_search?pretty" -H 'Content-Type: application/json' -d' { "query": { "match": { "id": "2" } } }'

I want row automation id’s on my XamDataGrid…

As part of some work I’m doing at the moment, building a UI automation API for our testers, I continually come across issues around data grid controls and accessing the rows within them (we’re primarily using the XamDataGrid from Infragistics).

What I need is an AutomationId reflecting some form of index in the grid. The good news is we can do this…

If we take a XamDataGrid and create a style such as this

<Style x:Key="RowPresenterStyle" TargetType="igDP:DataRecordPresenter">
  <Setter Property="AutomationProperties.AutomationId" Value="{Binding DataItemIndex}" />
</Style>

and now in the XamDataGrid’s FieldLayoutSettings we can apply this style using

<igDP:XamDataGrid.FieldLayoutSettings>
  <!-- other settings (attributes) omitted -->
  <igDP:FieldLayoutSettings
     DataRecordPresenterStyle="{StaticResource RowPresenterStyle}" />
</igDP:XamDataGrid.FieldLayoutSettings>
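With the style applied, a UI automation client can then locate a row via its index based automation id. Here’s a rough sketch using the managed UI Automation API (System.Windows.Automation) – how you obtain the window element is obviously application specific.

using System.Windows.Automation;

public static class GridAutomation
{
    // find a row presenter by its index based AutomationId, assuming 'window'
    // is the AutomationElement for the application's main window
    public static AutomationElement? FindRowByIndex(AutomationElement window, int index) =>
        window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.AutomationIdProperty, index.ToString()));
}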

Primary Constructors are coming in C# 12 to classes and structs

Available as part of Visual Studio 17.6 Preview 2, C# will be adding primary constructors.

Primary constructors already exist (as such) for records, but can be added to classes and structs, so the syntax

public class Person(string firstName, string lastName, int age);

will be roughly equivalent to

public class Person
{
   private readonly string firstName;
   private readonly string lastName;
   private readonly int age;

   public Person(string firstName, string lastName, int age)
   {
      this.firstName = firstName;
      this.lastName = lastName;
      this.age = age;
   }
}

By using a primary constructor, the compiler will no longer generate a default (parameterless) constructor. You can of course add your own, but you’ll then need to call the primary constructor, for example

class Person(string firstName, string lastName, int age)
{
   public Person() :
      this("", "", 0)
   {
   }
}

An obvious syntactic difference between a class/struct primary constructor and a record’s is that the record’s parameters are public, so we would tend to use property (PascalCase) naming conventions, and the parameters are exposed as public read-only properties. For the class/struct these parameters map to private fields, hence we use camelCase (if following the standards).
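To illustrate the difference, here’s a rough side by side sketch (the type names are just for the example).

// record: parameters become public init-only properties, hence PascalCase
public record PersonRecord(string FirstName, string LastName, int Age);

// class: parameters are captured privately, so nothing is exposed
// unless we choose to expose it ourselves
public class PersonClass(string firstName, string lastName, int age)
{
    public string FullName => $"{firstName} {lastName}";
    public int Age => age;
}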

Note: you cannot access them using this.firstName. This statement might seem slightly confusing because, whilst you cannot, for example, write the following

public Person() : 
   this("", "", 0)
{
   // this will not even compile
   this.firstName = "Test";
   // also will not compile
   firstName = "Test";
}

You can do things like the following

class Person(string firstName, string lastName, int age)
{
    public string FirstName
    {
        get => firstName;
        set => firstName = value;
    }

    public override string ToString() => $"{firstName} {lastName} {age}";
}

Essentially your primary constructor parameters are not available in overloaded constructors or using the this. syntax.

Trying out bun

You can never get too complacent with the JavaScript eco-system; no sooner do you start to feel comfortable than something else becomes the new hotness – this time it’s bun. Bun is a JavaScript runtime, pretty much analogous to Node.js, so basically a potential replacement for Node.js.

Setting things up

The Installation page gives details on setting up for various OSs. Windows is not well supported at the moment (i.e. an experimental release), but as I love devcontainers in VSCode, we’ll just set up a devcontainer to use the docker image.

  • Create a folder for your (yes we’re going to do it) hello world app, mine’s named hello-world
  • Create a folder named .devcontainer (for more info see my post Visual Code with vscontainers)
  • Create a file in .devcontainer named devcontainer.json and put the following code in it
    {
      "image": "oven/bun",
      "forwardPorts": [3000]
    }
    

Now open VS Code on the folder hello-world (or whatever you named it) and VS Code will hopefully ask if you want to open the folder as a devcontainer. Obviously say yes and it’ll set up the docker image for you, and then you’ll be working in a devcontainer with bun.

Is it working?

If you carried out the steps above (or installed by one of the other means) we now want to check bun is working. From your terminal – either in the devcontainer, or any terminal if you installed it on your machine or container – run the command

bun

You should see a list of commands etc. If not, check through your installation and, in the case of the devcontainer, make sure you’re using the terminal in VS Code and it’s showing the root folder of the docker image.

Getting Started

Hopefully everything is running, so now we need to create something; run

bun init

Then fill in the options. Mine are using all the defaults, but I’ll list them below anyway

  • package name hello-world
  • entry point index.ts

and that’s all there is to it. Bun will create an index.ts file, .gitignore, tsconfig.json, package.json and README.md. The index.ts looks like this

console.log("Hello via Bun!");

Let’s run this using

bun run index.ts

As you’d expect we’re seeing Hello via Bun! in our terminal window.

I want a server!

Most, if not all, of my work with Node.js was writing server based code, so let’s take the example from the Bun Quick Start page and fire up a little server app.

const server = Bun.serve({
  port: 3000,
  fetch(req) {
    return new Response("Hello World from the server");
  },
});

console.log(`Listening on http://localhost:${server.port} ...`);

We’re using port 3000, which we also have listed in our devcontainer.json within forwardPorts. So running index.ts from bun will start the server on port 3000 and VS Code (in my case) will ask if I wish to open the browser for that port. If you’re using a different method then simply open your browser with the URL http://localhost:3000/.

We can of course put scripts into our package.json to save us some typing, for example adding

"scripts": {
    "start": "bun run index.ts"
  },

Now we can use the command

bun run start

Everything else should pretty much work as with Node.js, but it’s reported that bun (which essentially also includes package management tooling) is quite a bit faster than npm and yarn. It also includes hot reload out of the box, i.e. we don’t need to use Nodemon, and it automatically transpiles TypeScript and JSX. So it’s very much an “all in one” solution for Node.js style development.

So that’s a really quick run through and a reminder to myself how to get bun up and running as quickly as possible.