Category Archives: Programming

Generic steps using regular expressions within SpecFlow

The use case I have is as follows.

Using SpecFlow for running UI automation tests, we have lots of views off of the main application Window. We might write scenarios specific to each view, such as

Given Trader view, set fields
Given Middle Office view, set fields
Given Sales view, set fields

and these in turn are written as the following steps

[Given(@"Trader view, set fields")]
public void GivenTraderViewSetFields()
{
  // set the fields on the trader view
}

[Given(@"Middle Office view, set fields")]
public void GivenMiddleOfficeViewSetFields()
{
  // set the fields on the middle office view
}

[Given(@"Sales view, set fields")]
public void GivenSalesViewSetFields()
{
  // set the fields on the sales view
}

Obviously this is fine, but if all our views use the same automation steps to set fields, then the code within each step will be almost exactly the same, so we might prefer to rewrite the step code to be more generic

[Given(@"(.*) view, set fields")]
public void GivenViewSetFields(string viewName)
{
  // find the view and set the fields using same automation steps
}

This is great, we’ve reduced our step code, but the (.*) accepts any value, which means that if we have a view which doesn’t support the same steps to set fields, this might confuse the person writing the test code. So we can change the (.*) to restrict the view names like this

[Given(@"(Trader|Middle Office|Sales) view, set fields")]
public void GivenViewSetFields(string viewName)
{
  // find the view and set the fields using same automation steps
}

Now if you add a new view like the step below, your SpecFlow plugin will highlight it as not having a matching step and if you run the test you’ll get the “No matching step definition found for one or more steps.” error.

Given Admin view, set fields

We can of course write a step like the following code, and now the test works

[Given(@"Admin view, set fields")]
public void GivenAdminViewSetFields()
{
}

But this looks different in the code highlighting via the SpecFlow extension in our IDE, and also, what if the Admin and Settings views can both use the same automation steps? Then we’re back to creating steps per view.

Yes, we could reuse the actual UI automation code, but I also want to reduce the number of steps to a minimum. SpecFlow allows what we might think of as regular expression overrides, so let’s change the above to look like this

[Given(@"(Admin|Settings) view, set fields")]
public void GivenAdminViewSetFields(string viewName)
{
}

Obviously we cannot have the same method name with the same arguments in the same class, but from the Feature/Scenario design perspective it now appears that we’re writing steps for the same method whereas in fact each step is routed to the method that understands how to automate that specific view.

This form of regular expression override also means we might have the method for Trader, Middle Office and Sales in one step definition file and the Admin, Settings method in another step definition file, making the separation more obvious (and of course allowing us to then use the same method name).
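As a sketch of that separation (the file and class names here are my own invention), the two step definition files might look something like this

```csharp
using TechTalk.SpecFlow;

// TradingViewSteps.cs - step definitions for the trading-style views
[Binding]
public class TradingViewSteps
{
    [Given(@"(Trader|Middle Office|Sales) view, set fields")]
    public void GivenViewSetFields(string viewName)
    {
        // automation shared by the Trader, Middle Office and Sales views
    }
}

// AdminViewSteps.cs - step definitions for the admin-style views
[Binding]
public class AdminViewSteps
{
    // same method name as above, but in a different class so there's no clash
    [Given(@"(Admin|Settings) view, set fields")]
    public void GivenViewSetFields(string viewName)
    {
        // automation shared by the Admin and Settings views
    }
}
```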

What’s also very cool about using this type of expression is that the SpecFlow IDE plugins will show, via autocomplete, that you have “Admin view, set fields”, “Trader view, set fields” etc. steps.

Transforming inputs in SpecFlow

One really useful option in SpecFlow, which maybe most people will never need, is the ability to intercept input and potentially transform it. The example I have is this: we want to include tokens wrapped in { }. These braces will contain a token which can then be replaced by a “real” value. Let’s assume {configuration.XXX} means get the value XXX from the configuration file (App.config).

With this use case in mind, let’s assume we have a feature that looks like this

Feature: Transformer Tests

  Scenario: Transform a number
    Given The value {configuration.someValue1}

  Scenario: Transform a table
    Given The values 
    | One                        | Two                        |
    | {configuration.someValue1} | {configuration.someValue2} |

We’re going to hard code our transformer to simply change configuration.someValue1 to Value1 and configuration.someValue2 to Value2, but of course the real data could be read from the App.config or generated in some more useful way.

Our step file for the above feature file, might look like this

[Given(@"The value (.*)")]
public void GivenTheValue(string s)
{
  s.Should().Be("Value1");
}

[Given(@"The values")]
public void GivenTheValues(Table table)
{
  foreach (var row in table.Rows)
  {
    var i = 1;
    foreach (var rowValue in row.Values)
    {
      rowValue.Should().Be($"Value{i}");
      i++;
    }
  }
}

We create a class, usually stored in the Hooks folder, which looks like the following, where we include methods with the StepArgumentTransformation attribute, meaning these methods are called for the given type to do something with the step argument.

[Binding]
public class StepTransformer
{
  private string SimpleTransformer(string s)
  {
    switch (s)
    {
      case "{configuration.someValue1}":
        return "Value1";
      case "{configuration.someValue2}":
        return "Value2";
      default:
        return s;
    }
  }
        
  [StepArgumentTransformation]
  public string Transform(string input)
  {
    return SimpleTransformer(input);
  }
        
  [StepArgumentTransformation]
  public Table Transform(Table input)
  {
    foreach (var row in input.Rows)
    {
      foreach (var kv in row)
      {
        row[kv.Key] = SimpleTransformer(kv.Value);
      }
    }
    return input;
  }
}

This is a very simple example of such code. In the real world we’d create some more advanced transformer code and inject that via the constructor, but the basic Transform methods would look much like they do here.

Running code when a feature or scenario starts in SpecFlow

One of the useful capabilities within SpecFlow is for us to define hooks which run when a feature or scenario starts and ends.

An obvious use of these is to log information about the feature or scenario – we’ll just use Debug.WriteLine to demonstrate this.

All we need to do is create a binding and then supply static methods marked with the BeforeFeature and AfterFeature attributes, which might look like this

[Binding]
public class FeatureHook
{
  [BeforeFeature]
  public static void BeforeFeature(FeatureContext featureContext)
  {
    Debug.WriteLine($"Feature starting: {featureContext.FeatureInfo.Title}");
  }

  [AfterFeature]
  public static void AfterFeature(FeatureContext featureContext)
  {
    Debug.WriteLine($"Feature ending: {featureContext.FeatureInfo.Title}");
  }
}

As you can probably guess, we can do the same with the BeforeScenario and AfterScenario attributes, like this

[Binding]
public class ScenarioHook
{
  [BeforeScenario]
  public static void BeforeScenario(ScenarioContext scenarioContext)
  {
    Debug.WriteLine($"Scenario starting: {scenarioContext.ScenarioInfo.Title}");
  }

  [AfterScenario]
  public static void AfterScenario(ScenarioContext scenarioContext)
  {
    Debug.WriteLine($"Scenario ending: {scenarioContext.ScenarioInfo.Title}");
  }
}

Creating a SpecFlow code generation plugin

Over the last few months I’ve been helping develop an Appium/WinAppDriver framework (or library if you prefer) to help move one of my client’s applications away from CodedUI (support for which is being phased out).

One of the problems I had was that I wanted to be able to put text into the Windows Clipboard and paste it into various edit controls. Pasting made things slightly quicker but also, more importantly, bypassed some controls’ autocomplete, which occasionally messes up when using SendKeys.

The problem is that the clipboard uses OLE and NUnit complains that it needs to be running in an STA.

To fix this, we can take advantage of the fact that SpecFlow’s generated classes are partial, so we can create a new class in another file with the same class name and namespace (marked as partial) and then add the NUnit Apartment attribute as below

[NUnit.Framework.Apartment(System.Threading.ApartmentState.STA)]
public partial class CalculatorFeature
{
}

This is all well and good. It’s hardly a big deal, but we have hundreds of classes and it’s just more hassle remembering to do this each time. Otherwise you run the tests and, when a test hits a Paste from the Clipboard, it fails, wasting a lot of time before you’re reminded you need to create the partial class.

SpecFlow has the ability to intercept parameters passed into our steps, it has tags and hooks which allow you to customize behaviour, and it also comes with a way to interact with the code generation process, which we can use to solve this issue and generate the required attributes in the generated code.

The easiest way to get started is to follow the steps below, although the code I’m presenting here is available in my PutridParrot.SpecFlow.NUnitSta GitHub repo, so you can bypass this post and go straight to the code for this article if you prefer.

  • Grab the zip or clone the repo to get the following GenerateOnlyPlugin
  • You may want to update SampleGeneratorPlugin.csproj to
    <TargetFrameworks>net472;netcoreapp3.1</TargetFrameworks>
    

    I would keep to these two frameworks to begin with, as we have a few other places to change and it’s better to get something working before we start messing with the rest of the build properties etc.

  • In the build folder edit the .targets file to reflect the Core and !Core frameworks, i.e.
    <_SampleGeneratorPluginFramework Condition=" '$(MSBuildRuntimeType)' == 'Core'">netcoreapp3.1</_SampleGeneratorPluginFramework>
    <_SampleGeneratorPluginFramework Condition=" '$(MSBuildRuntimeType)' != 'Core'">net472</_SampleGeneratorPluginFramework>
    

    Again I’ve left _SampleGeneratorPluginFramework for now as I just want to get things working.

  • Next go to the .nuspec file and change the file location at the bottom of this file, i.e.
    <file src="bin\$config$\net472\PutridParrot.SpecFlow.NUnitSta.*" target="build\net472"/>
    <file src="bin\$config$\netcoreapp3.1\PutridParrot.SpecFlow.NUnitSta.dll" target="build\netcoreapp3.1"/>
    <file src="bin\$config$\netcoreapp3.1\PutridParrot.SpecFlow.NUnitSta.pdb" target="build\netcoreapp3.1"/>
    
  • Now go and change the name of the solution, the project names and the assembly namespace if you’d like to. Of course this also means changing the PutridParrot.SpecFlow.NUnitSta in the above to your assembly name.

At this point you have a SpecFlow plugin which does nothing, but builds to a NuGet package and in your application you can set up your nuget.config with a local package pointing to the build from this plugin. Something like

<packageSources>
  <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
  <add key="Local" value="F:\Dev\PutridParrot.SpecFlow.NUnitSta\PutridParrot.SpecFlow.NUnitSta\bin\Debug"/>
</packageSources>

Obviously change the Local package repository value to the location of your package builds.

Now, if you like, you can create a default SpecFlow library using the default SpecFlow templates. Ensure the SpecFlow.NUnit version is compatible with your plugin (so you may need to update the packages from the template ones). Finally, just add the package you built and then build your SpecFlow test library.

If all went well, nothing will have happened, or more importantly, no errors will be displayed.

Back to our plugin. Thankfully I found some code on GitHub demonstrating adding global:: to the NUnit attributes, so my thanks to joebuschmann.

  • Create yourself a class, mine’s NUnit3StaGeneratorProvider
  • This should implement the interface IUnitTestGeneratorProvider
  • The constructor should look like this (along with the readonly field)
    private readonly IUnitTestGeneratorProvider _unitTestGeneratorProvider;
    
    public NUnit3StaGeneratorProvider(CodeDomHelper codeDomHelper)
    {
       _unitTestGeneratorProvider = new NUnit3TestGeneratorProvider(codeDomHelper);
    }
    
  • We’re only interested in the SetTestClass method, which looks like this
    public void SetTestClass(TestClassGenerationContext generationContext, string featureTitle, string featureDescription)
    {
       _unitTestGeneratorProvider.SetTestClass(generationContext, featureTitle, featureDescription);
    
       var codeFieldReference = new CodeFieldReferenceExpression(
          new CodeTypeReferenceExpression(typeof(ApartmentState)), "STA");
    
       var codeAttributeDeclaration =
          new CodeAttributeDeclaration("NUnit.Framework.Apartment", new CodeAttributeArgument(codeFieldReference));
       generationContext.TestClass.CustomAttributes.Add(codeAttributeDeclaration);
    }
    

    All other methods will just have calls to the _unitTestGeneratorProvider method that matches their method name.

  • We cannot actually use this until we register our new class with the generator plugins, so in your Plugin class, mine’s NUnitStaPlugin, change the Initialize method to look like this
    public void Initialize(GeneratorPluginEvents generatorPluginEvents, 
       GeneratorPluginParameters generatorPluginParameters,
       UnitTestProviderConfiguration unitTestProviderConfiguration)
    {
       generatorPluginEvents.CustomizeDependencies += (sender, args) =>
       {
          args.ObjectContainer
             .RegisterTypeAs<NUnit3StaGeneratorProvider, IUnitTestGeneratorProvider>();
       };
    }
    
  • If you changed the name of your plugin class, ensure the assembly:GeneratorPlugin attribute reflects this new name; the solution will fail to build if you haven’t updated it anyway.

Once all these steps are completed, you can build your NuGet package again. It might be worth incrementing the version and then updating it in your SpecFlow test class library. Rebuild that, and the generated .feature.cs file should now have classes with the [NUnit.Framework.Apartment(System.Threading.ApartmentState.STA)] attribute. No more writing partial classes by hand.

This plugin is very simple. All we’re really doing is creating a minimal implementation of IUnitTestGeneratorProvider which just adds an attribute to a TestClass, and we registered this with the GeneratorPluginEvents; there’s a lot more we could potentially do.

Whilst this was very simple in the end, the main issue I had previously was that we need to either copy/clone or handle the build props and targets, as well as use a .nuspec file, to get our project packaged in a way that works with SpecFlow.

Unit testing your MAUI project

Note: I found this didn’t work correctly on Visual Studio for Mac, I’ll update the post further if I do get it working.

This post is pretty much a duplicate of Adding xUnit Test to your .NET MAUI Project but just simplified for me to quickly repeat the steps which allow me to write unit tests against my MAUI project code.

So, you want to unit test some code in your MAUI project. It’s not quite as simple as just creating a test project and then referencing the MAUI project. Here are the steps to create an NUnit test project with .NET 7 (as my MAUI project has been updated to .NET 7).

  • Add a new NUnit Test Project to the solution via right mouse click on the solution, Add | Project and select NUnit Test Project
  • Open the MAUI project (csproj) file and prepend net7.0 to the TargetFrameworks so it looks like this
    <TargetFrameworks>net7.0;net7.0-android;net7.0-ios;net7.0-maccatalyst</TargetFrameworks>
    
  • Replace <OutputType>Exe</OutputType> with the following
    <OutputType Condition="'$(TargetFramework)' != 'net7.0'">Exe</OutputType>
    

    Yes, this is TargetFramework singular. You may need to reload the project.

  • Now in your unit test project you can reference this MAUI project and write your tests

So, why did we carry out these steps?

Our test project was targeting .NET 7.0 but our MAUI project was targeting platform-specific implementations of the .NET 7 framework, i.e. those for Android etc. We need the MAUI project to also build a plain .NET 7.0 compatible version, hence we added net7.0 to the TargetFrameworks.

The change to the OutputType ensures that we only build an EXE output for the platform-specific frameworks, and therefore for .NET 7.0 we’ll have a DLL to reference in our tests instead.

Now we can build and run our unit tests.
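For example, a test in the NUnit project might look like the following (MyMauiApp and TipCalculator are invented names here, standing in for your MAUI project’s namespace and one of its classes):

```csharp
using NUnit.Framework;
using MyMauiApp; // hypothetical namespace of the referenced MAUI project

public class TipCalculatorTests
{
    [Test]
    public void Calculate_TenPercent_ReturnsExpectedTip()
    {
        // TipCalculator is a hypothetical class living in the MAUI project,
        // now referenceable because the project builds a plain net7.0 DLL
        var calculator = new TipCalculator();
        Assert.That(calculator.Calculate(200m, 10), Is.EqualTo(20m));
    }
}
```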

A few tips and tricks for using Infragistics XamDataGrid

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

I’m working on a project using the XamDataGrid. I’ve used the UltraWinGrid by Infragistics in the past, but that doesn’t help at all when moving code from Windows Forms to WPF. So here’s a list of a few commonly required features and how to implement them using the XamDataGrid.

Note: this post only refers to using version 12.2

Grid lines

Problem: By default there are no grid lines on the XamDataGrid.

Solution: In your ResourceDictionary (within Generic.xaml or wherever you prefer) add the following

<Style TargetType="{x:Type igDP:CellValuePresenter}">
   <Setter Property="BorderThickness" Value="0,0,1,1" />
   <Setter Property="BorderBrush" Value="{x:Static SystemColors.ControlLightBrush}" />
</Style>

Obviously replace the BorderBrush colour with your preferred colour.

Remove the group by area

Problem: I want to remove the group by section from the XamDataGrid.

Solution: In your ResourceDictionary (within Generic.xaml or wherever you prefer) add the following

<Style x:Key="IceGrid" TargetType="igDP:XamDataGrid">
   <Setter Property="GroupByAreaLocation" Value="None" />
</Style>

don’t forget to apply the style to your grid, i.e.

<dp:XamDataGrid Style="{StaticResource IceGrid}" DataSource="{Binding Details}">
   <!-- Your grid code -->
</dp:XamDataGrid>

Column formatting

Problem: We want to change the numerical formatting for a column

Solution: We can set the EditorStyle for a field (editor doesn’t mean it will make the field editable)

<dp:XamDataGrid.FieldLayouts>
   <dp:FieldLayout>
      <dp:FieldLayout.Fields>
         <dp:Field Name="fee" Label="Fee" Width="80">
            <dp:Field.Settings>
               <dp:FieldSettings>
                  <dp:FieldSettings.EditorStyle>
                     <Style TargetType="{x:Type editors:XamNumericEditor}">
                        <Setter Property="Format" Value="0.####" />
                     </Style>
                  </dp:FieldSettings.EditorStyle>
               </dp:FieldSettings>
           </dp:Field.Settings>
        </dp:Field>         
      </dp:FieldLayout.Fields>
   </dp:FieldLayout>
</dp:XamDataGrid.FieldLayouts>

This code creates a field named fee with the label Fee, and the editor is set to only display decimal places if they actually exist.

As we’re defining the fields you’ll need to turn off auto generation of fields, as per

<dp:XamDataGrid.FieldLayoutSettings>
   <dp:FieldLayoutSettings AutoGenerateFields="False" />
</dp:XamDataGrid.FieldLayoutSettings>

First Chance Exceptions can be more important than you might think

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

You’ll often see first chance exceptions in the output window of Visual Studio without them actually causing any problems with your application, so what are they?

A first chance exception is usually associated with debugging an application using Visual Studio, but they can occur at runtime in release builds as well.

A first chance exception occurs whenever an exception is thrown. At that point .NET searches up the call stack for a handler (a catch). If a catch is found, .NET passes control to the catch handler code. So, let’s say we see a first chance exception which is caught in some library code: whilst debugging, we’ll see the message that a first chance exception occurred, but it won’t halt the application because it’s caught. On the other hand, if no catch is found, the exception becomes a second chance exception within the debugging environment, and this causes the debugger to break (if it’s configured to break on exceptions).

We can watch for first chance exceptions by adding an event handler to the AppDomain.CurrentDomain.FirstChanceException, for example

AppDomain.CurrentDomain.FirstChanceException += CurrentDomain_FirstChanceException;

private void CurrentDomain_FirstChanceException(
    object sender, FirstChanceExceptionEventArgs e)
{
   // maybe we'll log e.Exception
}

So first chance exceptions don’t tend to mean there are problems within your code; however, when using async/await, especially with async void methods, you may find first chance exceptions are a precursor to an unhandled exception from one of these methods.

If you check out Jon Skeet’s answer on Stack Overflow to Async/await exception and Visual Studio 2013 debug output behavior, he succinctly describes the problems that can occur with exception handling in async/await code.

To use this great quote: “When an exception occurs in an async method, it doesn’t just propagate up the stack like it does in synchronous code. Heck, the logical stack is likely not to be there any more.”. With async/await on a Task, the task itself contains any exceptions, but when there is no task, i.e. async/await with an async void method, there’s no means to propagate any exceptions. Instead these exceptions are more likely to first appear as first chance exceptions and then as unhandled exceptions.
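Here’s a minimal sketch of the async void case (the method names are invented); the catch block never sees the exception because there’s no Task to carry it back to the caller:

```csharp
using System;
using System.Threading.Tasks;

public class AsyncVoidSketch
{
    // async void: there's no Task for the caller to await or inspect
    public async void BrokenAsync()
    {
        await Task.Delay(100);
        throw new InvalidOperationException("this exception has nowhere to go");
    }

    public void Caller()
    {
        try
        {
            BrokenAsync(); // fire-and-forget
        }
        catch (InvalidOperationException)
        {
            // never reached - by the time the exception is thrown, this
            // try/catch has long since been exited; the exception shows up
            // as a first chance exception and later as an unhandled one
        }
    }
}
```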

See also https://msdn.microsoft.com/en-us/magazine/jj991977.aspx

Handling “unhandled” exceptions in WPF

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

None of us want our applications to just crash when an exception occurs, so we often ensure we’ve got catch blocks around possible exceptions. But of course sometimes we either don’t care to handle an exception explicitly, we forget to code the catch block, or the exception occurs somewhere that makes it impossible to handle in such a structured way. In such scenarios we want to handle all “unhandled” exceptions at the application level.

Let’s take a look at some of the ways to handle “unhandled” exceptions in a WPF application.

AppDomain.UnhandledException

The AppDomain.UnhandledException or more specifically AppDomain.CurrentDomain.UnhandledException.

Application.DispatcherUnhandledException

The Application.DispatcherUnhandledException or more specifically the Application.Current.DispatcherUnhandledException

Dispatcher.UnhandledException

The Dispatcher.UnhandledException or more specifically Dispatcher.CurrentDispatcher.UnhandledException

AppDomain.FirstChanceException

The AppDomain.FirstChanceException or more specifically AppDomain.CurrentDomain.FirstChanceException.

TaskScheduler.UnobservedTaskException

The TaskScheduler.UnobservedTaskException
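As a sketch, three of the handlers listed above might be wired up in App.xaml.cs something like this (the Debug.WriteLine logging is purely illustrative):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using System.Windows;

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        base.OnStartup(e);

        // last-chance handler; the application is usually terminating by now
        AppDomain.CurrentDomain.UnhandledException += (s, args) =>
            Debug.WriteLine($"AppDomain: {(args.ExceptionObject as Exception)?.Message}");

        // exceptions thrown on the UI (dispatcher) thread
        DispatcherUnhandledException += (s, args) =>
        {
            Debug.WriteLine($"Dispatcher: {args.Exception.Message}");
            args.Handled = true; // stop WPF treating the exception as fatal
        };

        // faulted Tasks whose exceptions were never observed
        TaskScheduler.UnobservedTaskException += (s, args) =>
        {
            Debug.WriteLine($"Task: {args.Exception?.Message}");
            args.SetObserved();
        };
    }
}
```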

C# 8.0 nullable and non-nullable reference types

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

One of the key C# 8.0 features is nullable/non-nullable reference types, but before we get started you’ll need to enable the language features by editing the csproj and adding the following to each PropertyGroup

<PropertyGroup>
    <LangVersion>8.0</LangVersion>
    <Nullable>enable</Nullable>
</PropertyGroup>

You can also enable/disable on a per file basis using

#nullable enable

and to disable

#nullable disable

What’s it all about?

So we’ve always been able to assign null to a reference type (null is also the default value when not initialized), but of course this means that if we try to call a method on a null reference we’ll get the good old NullReferenceException. So, for example, the following (without #nullable enable) will compile quite happily without warnings (although you will see warnings in the Debug output)

string s = null;
var l = s.Length;

Now if we add #nullable enable we’ll get warnings that we’re attempting to assign null to a non-nullable. Just as we use ? to mark primitives as nullable, for example int?, we now mark our reference types with ?, hence the code now looks like this

string? s = null;
var l = s.Length;

In other words we’re saying we expect that the string might be null. The use of non-nullable reference types will hopefully highlight possible issues that may result in NullReferenceExceptions, but as these are currently warnings you’ll probably want to enable Warnings as Errors.

This is an opt-in feature for obvious reasons i.e. it can have a major impact upon existing projects.

Obviously you still need to handle possible null values, partly because you might be working with libraries which do not have the nullable reference type option enabled, but also because we can “trick” the compiler. We know the previously listed code will result in a Dereference of a possibly null reference warning, but what if we change things to the following

public static string GetValue(string s)
{
   return s;
}

// original code changed to
string s = GetValue(null);
var l = s.Length;

This still gives us the warning Cannot convert null literal to non-nullable reference type, so that’s good, but we can change GetValue to this

public static string GetValue([AllowNull] string s)
{
   return s;
}

and now no warnings exist. The point being that even with nullable reference types enabled, and without marking a reference type as nullable, we can still get null reference exceptions.

Attributes

As you’ve seen, there are also some attributes (available in .NET Core 3.0) that we can apply to our code to give the compiler a little more information about our null expectations. You’ll need the following using directive

using System.Diagnostics.CodeAnalysis;

See Update libraries to use nullable reference types and communicate nullable rules to callers for a full list of attributes etc.

Alignment using string interpolation in C#

C#’s string interpolation supports alignment and format options.

Alignment

Using the syntax {expression,alignment}, we can supply a negative alignment number for a left-aligned expression and a positive number for a right-aligned expression. For example

var number = 123;
Console.WriteLine($"|{number,-10}|");
Console.WriteLine($"|{number,10}|");

This would produce the following

|123       |
|       123|

See also Alignment component.

Format String

We can use the {expression,alignment:format} or {expression:format} syntax to add formatting, i.e.

var number = 123;
Console.WriteLine($"|{number, -10:F3}|");
Console.WriteLine($"|{number, 10:F3}|");

would produce the following

|123.000   |
|   123.000|

and

var number = 123;
Console.WriteLine($"|{number:F3}|");
Console.WriteLine($"|{number:F3}|");

would produce

|123.000|
|123.000|

See also Format string component.