
Azure Functions

Azure Functions (like AWS Lambdas and GCP Cloud Functions) allow us to write serverless code literally just as functions, i.e. no need to fire up a web application or VM. Of course, just like Azure containers, there is a server component, but we, the developers, need not concern ourselves with handling its configuration etc.

Azure Functions are spun up as and when required, meaning we’re only charged when they’re used. The downside of this is that they have to spin up from a “cold” state. In other words, the first person to hit your function will likely incur a performance hit whilst the function is started and then invoked.

The other thing to remember is that Azure Functions are stateless. You might store state in a DB such as Cosmos DB, but essentially a function is invoked, does something, then after a timeout period it’s shut back down.

Let’s create an example function and see how things work…

  • Create a new Azure Functions project
  • When you get to the options for the Function, select Http trigger and select the Anonymous Authorization level
  • Complete the wizard by clicking the Create button

The Anonymous authorization level allows the function to be triggered without providing a key. The HTTP trigger, as it sounds, means the function is triggered by an HTTP request.

The following is basically the code that’s created from the Azure Function template

public static class ExampleFunction
{
  [FunctionName("Example")]
  public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
  {
    log.LogInformation("HTTP trigger function processed a request.");

    string name = req.Query["name"];

    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    var responseMessage = string.IsNullOrEmpty(name)
      ? "Pass a name in the query string or in the request body for a personalized response."
      : $"Hello, {name}. This HTTP triggered function executed successfully.";

    return new OkObjectResult(responseMessage);
  }
}

We can run and debug this via Visual Studio in the normal way. We’ll be supplied a URL, something like http://localhost:7071/api/Example, to access our function.

As you can see from the code above, we get passed an ILogger and an HttpRequest. From the request we can read query parameters, so the URL above would be used like this http://localhost:7071/api/Example?name=PutridParrot

Of course the whole purpose of an Azure Function is for it to run on Azure. To publish it…

  • From Visual Studio, right mouse click on the project and select Publish
  • For the target, select Azure. Click Next
  • Select Azure Function App (Windows) or Linux if you prefer. Click Next again
  • Either select a Function instance if one already exists or create a new instance from this wizard page

If you’re creating a new instance, select the resource group etc. as usual and then click Create when ready.

Note: I chose the Consumption plan, which is the default when creating an Azure Functions instance. This is basically “pay only for executions of your function app”, so it should be the cheapest plan.

The next step is to Finish the publish process. If all went well you’ll see everything configured and you can close the Publish dialog.

From the Azure dashboard you can simply type Function App into the search textbox and you should see the published function with a status of Running. If you click on the function name it will show you the current status of the function as well as its URL, which we can access just as we did with localhost, i.e.

https://myfunctionssomewhere.azurewebsites.net/api/Example?name=PutridParrot
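
As a quick sketch, we can also call the deployed function programmatically with HttpClient (the host name below is just the example one above, so replace it with your own function app’s URL)

using System;
using System.Net.Http;

// call the HTTP triggered function, passing the name query parameter
var client = new HttpClient();
var response = await client.GetStringAsync(
    "https://myfunctionssomewhere.azurewebsites.net/api/Example?name=PutridParrot");
Console.WriteLine(response);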

Blazor and the GetFromJsonAsync exception TypeError: Failed to Fetch

I have an Azure hosted web api. I also have a simple standalone Blazor application that’s meant to call the API to get a list of categories to display, i.e. the Blazor app calls the Azure web api, fetches the data and displays it – should be easy enough, right?

The web api can easily be accessed via a web browser or a console app using the .NET HttpClient, but the Blazor code below simply kept throwing an exception with the cryptic message “TypeError: Failed to Fetch”

@inject HttpClient Http

// Blazor and other code

protected override async Task OnInitializedAsync()
{
   try
   {
      _categories = await Http.GetFromJsonAsync<string[]>("categories");
   }
   catch (Exception e)
   {
      Debug.WriteLine(e);
   }
}
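
For context, this assumes the injected HttpClient has been registered in the Blazor app’s Program.cs with a BaseAddress pointing at the Azure web api, so the relative “categories” route resolves against it. A minimal sketch (the API URL is just a placeholder)

using Microsoft.AspNetCore.Components.Web;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
using Microsoft.Extensions.DependencyInjection;

var builder = WebAssemblyHostBuilder.CreateDefault(args);
// App is the Blazor template's root component
builder.RootComponents.Add<App>("#app");

// register the HttpClient used by @inject HttpClient Http,
// pointing at the Azure hosted web api (placeholder URL)
builder.Services.AddScoped(sp => new HttpClient
{
    BaseAddress = new Uri("https://my-api.example.azurecontainerapps.io/")
});

await builder.Build().RunAsync();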

What was happening is that I was actually getting a CORS error; sadly this isn’t really reported via the exception, so it’s not exactly obvious.

If you get this error interacting with your web api from Blazor, go to the Azure dashboard. I’m running my web api as a container app, so type CORS into the left-hand search bar of the resource (in my case a Container App) and you should see the CORS subsection within the Settings section.

Add * to the Allowed Origins and click Apply.

Now your Blazor app should be able to interact with the Azure web api app.
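
Alternatively (or additionally), CORS can be enabled in the ASP.NET Core web api code itself. This is just a minimal sketch of that approach, not what I did above

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
builder.Services.AddCors(options =>
{
    // mirror the dashboard setting: allow any origin (lock this down for production)
    options.AddDefaultPolicy(policy =>
        policy.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod());
});

var app = builder.Build();
app.UseCors();
app.MapControllers();
app.Run();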

Working with Kafka host via Docker from a C# client

Let’s take a look at running a test (single) instance of Kafka and writing a producer and consumer application in C# to interact with it. As is my preference, we’ll use Docker to run up our instance of Kafka; in my case this is running on an Ubuntu server.

Kafka running in Docker

We’ll start with the simplest docker compose file we can. So create the file docker-compose.yml and paste the following into it

version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://192.168.0.1:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

This is very simple; it’s running a single instance of Kafka (which is only really something we’d use for testing). Kafka uses Zookeeper (although I believe that dependency may have gone, or is potentially going away), so we have Zookeeper running as well.

In the above file we’re setting the PLAINTEXT_HOST to the machine running the instance of Kafka. Obviously this is not ideal, so we can change it to allow the host to be supplied either by an environment variable or via a .env file. For this example let’s change that line to

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://${HOST}:29092

and then add a .env file in the same location as the docker compose file, with something like this in it

HOST=192.168.0.1

Now we can run Kafka and Zookeeper up using

docker-compose up -d

Remove the -d if you want to watch the logs, which I would recommend, to at least feel confident that things are running as expected. You can also run docker-compose ps at any time to check that the services are running successfully.

C# Producer

We’ll create a console application that will simply send some messages to a topic – it’s our producer. Here’s my Producer.csproj

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Confluent.Kafka" Version="2.3.0" />
  </ItemGroup>

</Project>

Whilst we could read the Kafka configuration from an INI file or the like, for simplicity we’ll handle it in code. So here’s a very basic sample of a producer (this is heavily based on the Confluent Kafka example)

using Confluent.Kafka;

var config = new List<KeyValuePair<string, string>>
{
    new("bootstrap.servers", "192.168.0.1:19092"),
    new("client.id", "my-producer")
};

const string topic = "my-topic";

string[] tickers = { "AAPL", "GOOGL", "MSFT", "AMZN", "META", "TSLA", "GS" };
string[] trades = { "Buy 100", "Sell 1000", "Buy 9090", "Sell 45", "Buy 900000", "Sell 123", "Buy 8901" };

using var producer = new ProducerBuilder<string, string>(config).Build();

var rnd = new Random();

for (var i = 0; i < 10; ++i)
{
    var ticker = tickers[rnd.Next(tickers.Length)];
    var trade = trades[rnd.Next(trades.Length)];

    producer.Produce(topic, new Message<string, string> { Key = ticker, Value = trade },
        deliveryReport =>
        {
            if (deliveryReport.Error.Code != ErrorCode.NoError)
            {
                Console.WriteLine($"Error sending event: {deliveryReport.Error.Reason}");
            }
            else
            {
                Console.WriteLine($"Sent event topic = {topic}: key = {ticker} value = {trade}");
            }
        });
}

producer.Flush(TimeSpan.FromSeconds(10));

In the above we’re creating a configuration with a reference to our bootstrap server and a unique client.id. We also need a topic name, which should be unique and will need to be known by the consumers that want to fetch events for that topic.

In this example we create a batch of simple string key, string value events and then build the producer object. Then we just randomly pick a ticker, assign a trade against it and send that event to Kafka.

C# Consumer

Obviously we’re going to want to fetch these events at some point. We do this via a consumer. Once events are added to Kafka (and depending upon its setup/configuration) these events will “play” to a consumer that attaches to the correct topic. Once the events have been received by the consumer they will not be replayed again, unless we explicitly force Kafka to do so.

Again this example is based heavily on the Confluent Kafka C# consumer example. Create a console application and replace the contents of the .csproj with the same csproj listed earlier for the producer – this just adds the relevant client package. Here’s the code for our console based consumer

using Confluent.Kafka;

var config = new List<KeyValuePair<string, string>>
{
    new("bootstrap.servers", "192.168.0.1:29092"), // again, must match the advertised PLAINTEXT_HOST listener port
    new("group.id", "my-group"),
    new("auto.offset.reset", "earliest")
};

const string topic = "my-topic";

var cts = new CancellationTokenSource();
Console.CancelKeyPress += (_, e) =>
{
    e.Cancel = true; // prevent the process from terminating.
    cts.Cancel();
};

using var consumer = new ConsumerBuilder<string, string>(config).Build();

consumer.Subscribe(topic);
try
{
    while (true)
    {
        var cr = consumer.Consume(cts.Token);
        Console.WriteLine($"Consumed event, topic {topic}: key = {cr.Message.Key} value = {cr.Message.Value}");
    }
}
catch (OperationCanceledException)
{
    // Ctrl-C was pressed.
}
finally
{
    consumer.Close();
}

There’s a little more here than strictly required, just to keep the consumer running and watching for events. In a service we of course wouldn’t need half of this code.

Essentially we create a configuration which tells Kafka that the consumer has a group.id (this is mandatory) and where we want the offset to reset to for playing events from. In other words, this example will connect to Kafka and only consume events it hasn’t already consumed; it will not replay events from first to last.

If we wish to get ALL events (I’ve found this useful in some debugging situations, but it may also be required in a real world application), then we change the ConsumerBuilder line to the following

using var consumer = new ConsumerBuilder<string, string>(config)
    .SetPartitionsAssignedHandler((c, partitions) =>
    {
        // reset the offsets for this client
        var offsets = partitions.Select(tp => new TopicPartitionOffset(tp, Offset.Beginning));
        return offsets;
    })
    .Build();

Multiple brokers

A single Kafka broker is fine for testing, but Kafka was designed for multiple brokers. Here’s a docker compose file that takes our single instance and adds two more to create three Kafka brokers (I think this is often viewed as the minimum for production, but don’t quote me on that)

version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka-broker1:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-broker1
    depends_on:
      - zookeeper
    ports:
      - 19092:19092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-broker1:9092,PLAINTEXT_HOST://${HOST}:19092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
  kafka-broker2:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-broker2
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-broker2:9092,PLAINTEXT_HOST://${HOST}:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
  kafka-broker3:
    image: confluentinc/cp-kafka:latest
    hostname: kafka-broker3
    depends_on:
      - zookeeper
    ports:
      - 39092:39092
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-broker3:9092,PLAINTEXT_HOST://${HOST}:39092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3

For each broker we’ve set a unique KAFKA_BROKER_ID, and the KAFKA_ADVERTISED_LISTENERS now reference the newly added hostnames. Just run this up with docker-compose and the previous client code should work happily against this setup.
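
As a sketch, the client configuration can also list several of the advertised host ports as bootstrap servers, so bootstrapping still works if one broker happens to be down (the IP is just the example HOST value used earlier)

// listing more than one broker is not required, but gives the client
// alternatives for its initial connection to the cluster
var config = new List<KeyValuePair<string, string>>
{
    new("bootstrap.servers", "192.168.0.1:19092,192.168.0.1:29092,192.168.0.1:39092"),
    new("client.id", "my-producer")
};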

Code etc.

Code and docker compose files are available as part of my GitHub blog-projects repo.

I want row automation id’s on my XamDataGrid…

As part of work I’m doing at the moment, building a UI automation API for our testers, I continually come across issues around data grid controls and accessing the rows within them (we’re primarily using XamDataGrids from Infragistics).

What I need is to have an AutomationId reflecting some form of index in the grid. Good news is we can do this…

If we take a XamDataGrid and create a style such as this

<Style x:Key="RowPresenterStyle" TargetType="igDP:DataRecordPresenter">
  <Setter Property="AutomationProperties.AutomationId" Value="{Binding DataItemIndex}" />
</Style>

and now in the XamDataGrid’s FieldLayoutSettings we can apply this style using

<igDP:XamDataGrid.FieldLayoutSettings>
  <!-- Other settings can be applied to FieldLayoutSettings as usual -->
  <igDP:FieldLayoutSettings
     DataRecordPresenterStyle="{StaticResource RowPresenterStyle}" />
</igDP:XamDataGrid.FieldLayoutSettings>

Primary Constructors are coming in C# 12 to classes and structs

Available as part of Visual Studio 17.6 preview 2, C# will be adding primary constructors to classes and structs.

Primary constructors already exist (as such) for records, but they can now be added to classes and structs, so the syntax

public class Person(string firstName, string lastName, int age);

will be equivalent to

public class Person
{
   private readonly string firstName;
   private readonly string lastName;
   private readonly int age;

   public Person(string firstName, string lastName, int age)
   {
      this.firstName = firstName;
      this.lastName = lastName;
      this.age = age;
   }
}

By using a primary constructor the compiler will no longer generate a default (parameterless) constructor. You can of course add your own, but you’ll then need to call the primary constructor, for example

class Person(string firstName, string lastName, int age)
{
   public Person() :
      this("", "", 0)
   {
   }
}

An obvious syntactic difference between a class/struct primary constructor and a record’s is that the record’s parameters are public, so we would tend to use the property (Pascal case) naming convention, and they are exposed as public read-only properties. For a class/struct these parameters map to private fields, hence we use camel case (if following the standards).
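
To make the difference concrete, here’s a minimal comparison sketch (the type names are just illustrative)

// record: the parameters become public init-only properties (FirstName, LastName, Age)
public record PersonRecord(string FirstName, string LastName, int Age);

// class: the parameters are only captured for use within the class, nothing public is generated
public class PersonClass(string firstName, string lastName, int age)
{
    public override string ToString() => $"{firstName} {lastName} {age}";
}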

Note: you cannot access these parameters using this.firstName. That might seem slightly confusing because, whilst you cannot, for example, write the following

public Person() : 
   this("", "", 0)
{
   // this will not even compile
   this.firstName = "Test";
   // also will not compile
   firstName = "Test";
}

You can do things like the following

class Person(string firstName, string lastName, int age)
{
    public string FirstName
    {
        get => firstName;
        set => firstName = value;
    }

    public override string ToString() => $"{firstName} {lastName} {age}";
}

Essentially your primary constructor parameters are not available in overloaded constructors or using the this. syntax.

Running and deploying to Azure Kubernetes

We’re going to be deploying our web services to k8s using Docker, so first off we need to create a registry (if you don’t already have one) on Azure.

  • Go to the Azure Dashboard and select Create a resource
  • Locate and click on Container Registry
  • Click on Create
  • Supply the Resource Group you want to use, I’m using the one I created for my previous post Creating and using the Azure Service Bus
  • Create a registry name, mine’s apptestregistry
  • Select your location and SKU, I’ve gone Basic on the SKU

Now click Review + create. Review your options and if all looks correct then click Create. Now wait for the deployment to complete and go to the resource’s dashboard, where you’ll see (at least on the current dashboard) options to Push a container image, Deploy a container image etc.

Adding a Kubernetes service

We now need to return to the main dashboard to select Containers | Kubernetes services or just type Kubernetes services into the Dashboard search bar.

  • In Kubernetes services click Create
  • Click Create a Kubernetes cluster
  • Supply a Resource Group
  • For Cluster preset configuration choose Dev/Test for now
  • Enter a Kubernetes cluster name, mine’s testappcluster
  • Fill in the rest of the options to suit your location etc.

Now click Review + create.

Stop, don’t press Create yet. Before we click Create, go to the Integrations tab and set the Container Registry to the one we created – if you don’t do this then you’ll get 401s when trying to deploy from your registry into k8s.

Note: There is a way to create this integration between k8s and your registry later, but it is so much simpler letting the Dashboard do the work for us.

Now review your options and, if all looks correct, click Create.

Note: I kept getting an error around quotas with the setup above. I found that if you reduce the autoscaling slider/values (as mine showed it would be maxed out) then this should pass the review phase.

Once the deployment is complete (and it may take a little while) we’ll need something to push to it…

Creating a simple set of microservices

  • Using Visual Studio, create a new ASP.NET Core Web API, make sure to have Docker support checked
  • Delete the WeatherForecast controller and domain object
  • Right mouse click on Controllers and select Add | Controller
  • Select an empty controller (a minimal sketch of what this might end up as is shown below)
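
The empty controller could evolve into something like the following sketch; the myname service name and its operation parameter are assumptions chosen to line up with the load balancer example later in this post

using Microsoft.AspNetCore.Mvc;

// purely illustrative – a trivial endpoint for the "myname" service
[ApiController]
[Route("[controller]")]
public class MyNameController : Controller
{
    [HttpGet("Get")]
    public string Get(string operation) => $"myname handled: {operation}";
}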

Note: We’re going to use the Azure CLI tools, so if you’ve not got the latest, go to https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-windows?tabs=azure-cli#install-or-update and install from here

Now let’s dockerize our WebApi…

Note: Replace apptestregistry with your registry name; I’m not going to include the az login etc. My service within the registry is called myname, so replace that with something meaningful. Also, ver01 is the tag/version I’m assigning to the image.

  • Copy the Dockerfile for your WebApi into the solution folder of your project
  • Run az acr build -t myname:ver01 -r apptestregistry .

That’s all there is to it, repeat for each WebApi you want to deploy.

To check all went well, we’re going to use the following az command to list the repositories within the Azure registry (change the registry name to yours)

az acr repository list --name apptestregistry

Or you can go to the Azure container registry dashboard and select Repositories

Deploying to Kubernetes

At this point we’ve created our WebApi, dockerized it and deployed to the Azure registry of our choice, but to get it into k8s we need to create the k8s manifests.

We create a YAML file, for example kubernetes-manifest-myname.yaml (change myname to the name of your service, as well as within the YAML below, and don’t forget to change apptestregistry to your registry name too)

apiVersion: apps/v1 
kind: Deployment 
metadata: 
  name: myname
  labels: 
    app: myname
spec: 
  selector: 
    matchLabels: 
      app: myname 
  replicas: 1 
  template: 
    metadata: 
      labels: 
        app: myname
    spec: 
      containers: 
      - name: myname
        image: apptestregistry.azurecr.io/myname:ver01 
        resources: 
          requests: 
            cpu: 100m 
            memory: 100Mi 
          limits: 
            cpu: 200m 
            memory: 200Mi 
        ports: 
        - containerPort: 80 
--- 
apiVersion: v1 
kind: Service 
metadata: 
  name: myname
spec:
  ports: 
  - port: 80 
  selector: 
    app: myname
---

We need to set up kubectl access from Terminal/PowerShell using

az aks get-credentials -n testappcluster -g test-app-cluster
kubectl apply -f .\kubernetes-manifest-myname.yaml

We can check that the pods exist using

kubectl get pods

If you notice a status of ImagePullBackOff then it’s possible you’ve not set up the integration between k8s and your registry; to be sure, type kubectl describe pod along with your pod name to get more information.

And finally, let’s check the status of our services using

kubectl get services

In the Azure Kubernetes services dashboard you can select Services and ingresses to view the newly added service(s).

We can see the Cluster IP address, which is an internal IP address. So, for example, if our myname service has the Cluster IP 10.0.40.100 then other services deployed to the cluster can interact with the myname service via this IP.

External facing service

We’ve created a service which is hosted in a pod within k8s, but we have no external interface to it. A simple way to give our service an external IP is to declare a service of the load balancer type which routes calls to the various services we’ve deployed.

So let’s assume we created a new WebApi with Docker support and added the following controller, which routes operations to a single endpoint in this case, but of course it could route to any WebApi that we’ve deployed, via its Cluster IP

[ApiController]
[Route("[controller]")]
public class MathController : Controller
{
    [HttpGet("Get")]
    public async Task<string> Get(string op)
    {
        var httpClient = new HttpClient();
        using var response = await httpClient.GetAsync($"http://10.0.40.100/myname/get?operation={op}");
        return await response.Content.ReadAsStringAsync();
    }
}

Once we have an external IP associated with this load balancer type of service, we can access it from the web, i.e. http://{external-ip}/math/get?op=display

Creating and using the Azure Service Bus

The Azure Service Bus is not much different to every other service bus out there, i.e. we send messages to it and other applications or services receive those messages by pulling the messages off the bus or monitoring it.

Let’s set up an Azure Service bus.

We’ll use the Azure Dashboard (the instructions below are correct as per the Dashboard at the time of writing).

  • Type Service Bus into the search bar of the dashboard or locate the Service Bus from the dashboard buttons etc. if available
  • Click Create, then either give the resource group a name or select an existing one. I’ll create a new one and mine’s going to be called test-app. Create a namespace, mine’s test-app-bus, and set the location, pricing tier etc. as you wish.
  • Click the Review + create button.
  • Review your settings then if you’re happy, click the Create button

If all went well, you’ll see the deployment in progress. When completed, we need to set up a queue…

  • Click the Go to resource button from the deployment page
  • Click the Queue button
  • The queue name needs to be unique within the namespace, I’ve chosen test-app-queue, although it’s more likely you’ll want to choose a name that really reflects what the purpose of the queue is, for example trades, appointments, orders are some real world names you might prefer to use
  • I’m going to leave all queue options as the default for this example
  • Click the Create button and in a few seconds the queue should be created.

In the dashboard for the Service Bus Namespace you’ll see the queues listed at the bottom of the page. This page also shows request counts, message counts etc.

We’ve not completed everything yet. We need to create a SAS policy for accessing the service bus…

  • From the Service Bus Namespace dashboard, select Entities | Queues and select the queue to view the Service Bus Queue dashboard page
  • From here select Settings | Shared access policies
  • Click the Add button
  • We’re going to set the policy up for applications sending messages, so give the policy an appropriate name, such as SenderPolicy and ensure the Send checkbox is checked
  • Finally, click the Create button

If you now click on the policy it will show keys and connection strings. We’ll need the Primary Connection String for our test application.

Note: Obviously these keys need to be kept secure otherwise anyone could interact with your service bus queues.

Creating a test app to send messages

This is all well and good, but let’s now create a little C# test app to send messages to our queue.

  • From Visual Studio create a new project, we’ll just create a Console application for now
  • Add a NuGet reference to Azure.Messaging.ServiceBus

In Program.cs simply copy/paste the following

using Azure.Messaging.ServiceBus;

const string connectionString = "the-send-primary-connection-string";
const string queueName = "test-app-queue";

var serviceBusClient = new ServiceBusClient(connectionString);
var serviceBusSender = serviceBusClient.CreateSender(queueName);

try
{
    using var messageBatch = await serviceBusSender.CreateMessageBatchAsync();

    for (var i = 1; i <= 10; i++)
    {
        if (!messageBatch.TryAddMessage(new ServiceBusMessage($"Message {i}")))
        {
            throw new Exception($"The message {i} is too large to fit in the batch");
        }
    }

    await serviceBusSender.SendMessagesAsync(messageBatch);
    Console.ReadLine();
}
finally
{
    await serviceBusSender.DisposeAsync();
    await serviceBusClient.DisposeAsync();
}

Creating a test app to receive messages

Obviously we will want to receive messages from our service bus, so let’s create another C# console application and copy/paste the following into Program.cs

using Azure.Messaging.ServiceBus;

async Task MessageHandler(ProcessMessageEventArgs args)
{
    var body = args.Message.Body.ToString();
    Console.WriteLine($"Received: {body}");
    await args.CompleteMessageAsync(args.Message);
}

Task ErrorHandler(ProcessErrorEventArgs args)
{
    Console.WriteLine(args.Exception.ToString());
    return Task.CompletedTask;
}

const string connectionString = "the-listen-primary-connection-string";
const string queueName = "test-app-queue";

var serviceBusClient = new ServiceBusClient(connectionString);
var serviceBusProcessor = serviceBusClient.CreateProcessor(queueName, new ServiceBusProcessorOptions());

try
{
    serviceBusProcessor.ProcessMessageAsync += MessageHandler;
    serviceBusProcessor.ProcessErrorAsync += ErrorHandler;

    await serviceBusProcessor.StartProcessingAsync();

    Console.ReadKey();

    await serviceBusProcessor.StopProcessingAsync();
}
finally
{
    await serviceBusProcessor.DisposeAsync();
    await serviceBusClient.DisposeAsync();
}

Before this will work we also need to go back to the Azure dashboard, go to the Queues section and click on Shared access policies. Alongside our SenderPolicy add a new policy, we’ll call it ListenPolicy, and check the Listen checkbox. Copy its Primary Connection String into the code above.

This code will listen for messages but in some cases you may wish to just get a single message, in which case you could use this code

using Azure.Messaging.ServiceBus;

const string connectionString = "the-listen-primary-connection-string";
const string queueName = "test-app-queue";

await using var client = new ServiceBusClient(connectionString);

var receiver = client.CreateReceiver(queueName);
var message = await receiver.ReceiveMessageAsync();
var body = message.Body.ToString();

Console.WriteLine(body);

await receiver.CompleteMessageAsync(message);

Converting data to a Table in SpecFlow

The use case for this is, I have a step Set fields on view which takes a table like this

Scenario: With table
  Given Set fields on view 
    | A  | B  | C  |
    | a1 | b1 | c1 |
    | a2 | b2 | c2 |      
    | a3 | b3 | c3 |                            

The code for this step currently just outputs the data sent to it to the console or log file, so it looks like this

[Given(@"Set fields on view")]
public void GivenFieldsOnView(Table table)
{
   table.Log();
}

Now in some cases I want to set fields on a view using an Examples table. In the current case we’re sending multiple rows in one go, but in some situations we may want to set fields one row at a time, so we use Examples like this

Scenario Outline: Convert To Table
  Given Set fields on view A=<A>, B=<B>, C=<C>

  Examples:
    | A  | B  | C  |
    | a1 | b1 | c1 |
    | a2 | b2 | c2 |      
    | a3 | b3 | c3 |            

We’ll then need a new step that looks like this

[Given(@"Set fields on view (.*)")]
public void GivenFieldsOnView(string s)
{
}

Of course we can now split the variable s by comma, then split by = to get our key/values, just like a Table, and there’s nothing wrong with this approach, but an alternative is to handle this transformation as a StepArgumentTransformation. So our code above would change to

[Given(@"Set fields on view (.*)")]
public void GivenFieldsOnView2(Table table)
{
   table.Log();
}

and now in our hook class we’d have something like this

[Binding]
public class StepTransformer
{
  [StepArgumentTransformation]
  public Table TransformToTable(string input)
  {
    var inputs = input.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
    var d = inputs.Select(s =>
      s.Split(new[] { '=' }, StringSplitOptions.RemoveEmptyEntries))
        .ToDictionary(v => v[0], v => v[1]);

    // this only handles a single row 
    var table = new Table(d.Keys.ToArray());
    table.AddRow(d.Values.ToArray());
    return table;
  }
}

Note: This is just an example with no error handling, and it will only convert a string to a single row; it’s just a demo at this point.
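
If multiple rows were ever needed, one possible (purely hypothetical) extension of the transformer would be to agree on a row separator, say ‘;’, and build the table up row by row – a sketch which would replace the single row version above, assuming every row uses the same keys

[StepArgumentTransformation]
public Table TransformToMultiRowTable(string input)
{
    // hypothetical syntax: "A=a1, B=b1; A=a2, B=b2" – rows separated by ';'
    var rows = input.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries)
        .Select(row => row
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(pair => pair.Split(new[] { '=' }, StringSplitOptions.RemoveEmptyEntries))
            .ToDictionary(kv => kv[0].Trim(), kv => kv[1].Trim()))
        .ToList();

    // assume every row shares the same keys; the first row defines the header
    var table = new Table(rows[0].Keys.ToArray());
    foreach (var row in rows)
    {
        table.AddRow(row.Values.ToArray());
    }
    return table;
}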

So, now what we’ve done is create a transformer which understands a string syntax such as K1=V1, K2=V2… and can convert to a table for us.

I know you’re probably asking why we couldn’t just execute the same code in the public void GivenFieldsOnView(string s) method ourselves. You could of course do that, but now you’ve got a generic sort of method for making such transformations for you.

What I really wanted to try to do is use a single step to handle this by changing the regular expression, i.e. we have one method for both situations. Sadly I’ve not yet found a way to achieve this, but at least we can reduce the code to just handle the data as tables.

Ordering of our SpecFlow hooks

In my post Running code when a feature or scenario starts in SpecFlow I showed that we can use hooks to run code before a feature and scenario. However, what if we have, for example, a lot of separate scenario hooks and the order they run in matters? Maybe we need the logging of the scenario title to run first.

The BeforeScenario attribute has an Order property which we can assign a number to, like this

[BeforeScenario(Order = 1)]
public static void BeforeScenario(ScenarioContext scenarioContext)
{
  Debug.WriteLine($"Scenario starting: {scenarioContext.ScenarioInfo.Title}");
}

This will run before the other BeforeScenario hooks, including those with no Order property.

Beware: if you set [AfterScenario(Order = 1)] it would also run first, which you might not want in a logging situation. In that case (the only solution I’ve found thus far) you’ll have to have an Order property on all of your AfterScenario attributes, i.e. explicitly state the order of all such hooks.
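
As a sketch of that last point (the hook names and order values here are just illustrative), explicitly ordering the AfterScenario hooks so the logging hook runs last might look like this

using System.Diagnostics;
using TechTalk.SpecFlow;

[Binding]
public class AfterScenarioHooks
{
    // illustrative: clean-up runs first…
    [AfterScenario(Order = 1)]
    public static void CleanUpTestData(ScenarioContext scenarioContext)
    {
        // tidy up anything the scenario created
    }

    // …whilst the logging hook is explicitly ordered to run last
    [AfterScenario(Order = 2)]
    public static void LogScenarioFinished(ScenarioContext scenarioContext)
    {
        Debug.WriteLine($"Scenario finished: {scenarioContext.ScenarioInfo.Title}");
    }
}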

Generic steps using regular expressions within SpecFlow

The use case I have is as follows.

Using SpecFlow for running UI automation tests, we have lots of views off of the main application window, and we might write scenarios specific to each view, such as

Given Trader view, set fields
Given Middle Office view, set fields
Given Sales view, set fields

and these in turn are written as the following steps

[Given(@"Trader view, set fields")]
public void GivenTraderViewSetFields()
{
  // set the fields on the trader view
}

[Given(@"Middle Office view, set fields")]
public void GivenMiddleOfficeViewSetFields()
{
  // set the fields on the middle office view
}

[Given(@"Sales view, set fields")]
public void GivenSalesViewSetFields()
{
  // set the fields on the sales view
}

Obviously this is fine, but if all our views have the same automation steps to set fields, then the code within each will be almost exactly the same, so we might prefer to rewrite the step code to be more generic

[Given(@"(.*) view, set fields")]
public void GivenViewSetFields(string viewName)
{
  // find the view and set the fields using same automation steps
}

This is great, we’ve reduced our step code, but the (.*) accepts any value which means that if we have a view which doesn’t support the same steps to set fields, then this might confuse the person writing the test code. So we can change the (.*) to restrict the view names like this

[Given(@"(Trader|Middle Office|Sales) view, set fields")]
public void GivenViewSetFields(string viewName)
{
  // find the view and set the fields using same automation steps
}

Now if you add a new view like the step below, your SpecFlow plugin will highlight it as not having a matching step and if you run the test you’ll get the “No matching step definition found for one or more steps.” error.

Given Admin view, set fields

We can of course write a step like the following, and now the test works

[Given(@"Admin view, set fields")]
public void GivenAdminViewSetFields()
{
}

But this looks different in the code highlighting via the SpecFlow extension to our IDE, and also, what if the Admin and Settings views can both use the same automation steps? Then we’re back to creating steps per view again.

Yes, we could reuse the actual UI automation code, but I also want to reduce the number of steps to a minimum. SpecFlow allows what we might think of as regular expression overrides, so let’s change the above to look like this

[Given(@"(Admin|Settings) view, set fields")]
public void GivenAdminViewSetFields(string viewName)
{
}

Obviously we cannot have the same method name with the same arguments in the same class, but from the Feature/Scenario design perspective it now appears that we’re writing steps for the same method whereas in fact each step is routed to the method that understands how to automate that specific view.

This form of regular expression override also means we might have the method for Trader, Middle Office and Sales in one step definition file and the Admin, Settings method in another step definition file, making the separation more obvious (and of course allowing us to then use the same method name).

What’s also very cool about using this type of expression is that the SpecFlow IDE plugins will show, via autocomplete, that you have “Admin view, set fields”, “Trader view, set fields” etc. steps.