The Gherkin language

Gherkin is a DSL used within BDD development. It’s used along with Cucumber, which processes the DSL, or, in the case of .NET, with tools such as SpecFlow (which I posted about a long time back, see Starting out with SpecFlow) to help generate files and code.

Gherkin allows us to create the equivalent of use cases in a human-readable form, using a simple set of keywords and syntax which can then be used to generate a series of method calls to undertake some action and assertion.

Getting started

We can use a standard text editor to create Gherkin feature files, but if you prefer syntax highlighting and IntelliSense (although the language is pretty simple) then install SpecFlow into Visual Studio, add a Gherkin syntax highlighter to VS Code (for example), or just use your preferred text editor.

I’m going to create a feature file (with the .feature extension) using the SpecFlow item template, so we have a starting point which we can then work through. Here’s the generated file

Feature: Add a project
	In order to avoid silly mistakes
	As a math idiot
	I want to be told the sum of two numbers

@mytag
Scenario: Add two numbers
	Given I have entered 50 into the calculator
	And I have entered 70 into the calculator
	When I press add
	Then the result should be 120 on the screen

What we have here is a Feature which is meant to describe a single piece of functionality within an application. The feature name should be on the same line as the Feature: keyword.

In this example, we’ve defined a feature which indicates our application will have some way to add a project. We can now add an optional description, which is exactly what the SpecFlow template did for us. The description may span multiple lines (as can be seen above, with the lines under the Feature line being the description) and should be a brief explanation of the specific feature or use case. Whilst the aim is to be brief, it should include acceptance criteria and any relevant information such as user permissions, roles or rules around the feature.

Let’s change our feature text to be a little more meaningful for our use case.

Feature: Add a project

	Any user should be able to create/add a new project
	as long as the following rules are met

	1. No duplicate project names can exist
	2. No empty project names should be allowed

I’m sure with a little thought I can come up with more rules, but you get the idea.

The description of the feature is a useful piece of documentation and can be used as a specification/acceptance criteria.

Looking at the generated feature code, we can see that SpecFlow also added @mytag. Tags allow us to group scenarios. In terms of our end tests, this can be seen as a way of grouping features, scenarios etc. Multiple tags may be applied to a single feature or scenario, for example

@project @mvp
Feature: Add a project

I don’t need any tags for the feature I’m implementing here, so I’ll delete that line of code.

The Scenario is where we define each specific scenario of a feature and the steps to be taken/expected. The Scenario takes a similar form to a Feature, i.e. Scenario: and then a description of the context.

Following the Scenario line, we then begin to define the steps that make up our scenario, using the keywords Given, When, Then, And and But.

In unit testing terms, we can view Given as a precondition or setup; When, And and But as actions (where But is seen as a negation); and Then as an assertion.

Let’s change our scenario to use some of these steps in a more meaningful manner within the context of our Scenario

Scenario: Add a project to the project list
	Given I have added a valid project name
	When I press the OK button
	Then the list of projects should now include my newly added project

Eventually, if we generate code from this feature file, each of these steps would get turned into a set of methods which could be used as follows

  • Given becomes a method to initialize the code to the expected context
  • When becomes a method to carry out the action under test
  • Then becomes a method to assert or validate our expectations
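For example, with SpecFlow these steps bind to methods via attributes. The following is a rough sketch of what the bound step definitions might look like (the class name, method names and the in-memory project list are mine, purely for illustration, not the exact generated code):

```csharp
using System.Collections.Generic;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class AddProjectSteps
{
   // illustrative state shared between the steps of a scenario
   private readonly List<string> projects = new List<string>();
   private string projectName;

   [Given("I have added a valid project name")]
   public void GivenIHaveAddedAValidProjectName()
   {
      // the precondition/setup
      projectName = "My Project";
   }

   [When("I press the OK button")]
   public void WhenIPressTheOkButton()
   {
      // the action under test
      projects.Add(projectName);
   }

   [Then("the list of projects should now include my newly added project")]
   public void ThenTheListShouldIncludeTheNewProject()
   {
      // the assertion
      Assert.Contains(projectName, projects);
   }
}
```

Each attribute’s text matches the step text in the feature file, which is how the runner wires the scenario to the code.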

Multiple When steps can be defined using the And keyword; for example, imagine our Scenario looked like this

Scenario: Add a project to the project list
	Given I have added a valid project name
	When I press the OK button
	And I checked the Allow checkbox
	Then the list of projects should now include my newly added project

Now, in addition to the When I press the OK button step, I would also get another When created, as the And keyword simply becomes another When action. In essence, And duplicates the previous keyword.

We can also include the But keyword. Like And, this is really another way of defining additional When steps in a more human-readable way; But works in the same way as the And keyword by simply creating another When step in the generated code. However, But should be viewed as a negation step, for example

Scenario: Add a project to the project list
	Given I have added a valid project name
	When I press the OK button
	But the Do Not Allow checkbox is unchecked
	Then the list of projects should now include my newly added project

Finally, as stated earlier, Then can be viewed as the place to write our assertions, or simply to check whether the results match our expectations. We can again use And after the Then to create multiple Then steps and thus assert multiple expectations, for example

Scenario: Add a project to the project list
	Given I have added a valid project name
	When I press the OK button
	But the Do Not Allow checkbox is unchecked
	Then the list of projects should now include my newly added project
	And the list of projects should increase by 1

More keywords

In the previous section we covered the core keywords of Gherkin for defining our features and scenarios. But Gherkin also includes the following

Background, Scenario Outline and Examples.

The Background keyword is used to define reusable Given steps, i.e. if all our scenarios end up requiring the application to be in edit mode, we might declare a background before any scenarios, such as this

Background:
	Given the projects list is in edit mode
	And the user clicks the Add button

We’ve now created a sort of top-level scenario which is run before each Scenario.

The Scenario Outline keyword allows us to define a sort of scenario function or template. So, if we have multiple scenarios which differ only in terms of the data being used, then we can create a Scenario Outline and replace the specific data points with variables.

For example, let’s assume we have scenarios which define multiple project names to exercise the feature’s two rules (which we outlined in the feature). Let’s assume we always have a project named “Default” within the application and therefore we cannot duplicate this project name. We also cannot enter an empty (“”) project name.

If we write these as two scenarios, then we might end up with the following

Scenario: Add a project to the project list with an empty name
	Given the project name ""
	When I press the OK button
	Then the project should not be added

Scenario: Add a project to the project list with a duplicate name
	Given the project name "Default"
	When I press the OK button
	Then the project should not be added

If we include values within quotation marks or include numbers within our steps, then these will become arguments to the methods generated for these steps. This obviously offers us a way to reuse such steps or use example data etc.
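As a sketch of what that looks like in SpecFlow (the regex and names here are illustrative), the quoted value in the step becomes a parameter of the bound method, letting both scenarios share one step definition:

```csharp
using TechTalk.SpecFlow;

[Binding]
public class ProjectNameSteps
{
   [Given(@"the project name ""(.*)""")]
   public void GivenTheProjectName(string projectName)
   {
      // projectName is "" for one scenario and "Default" for the other
   }
}
```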

Using a Scenario Outline these could be instead defined as

Scenario Outline: Add a project to the project list with an invalid name
	Given the project name <project-name>
	When I press the OK button
	Then the project should not be added

	Examples: 
	| project-name |
	| ""           |
	| "Default"    |

The <> act as placeholders and the string within can be viewed as a variable name. We then define Examples, which become our data inputs to the scenario.

Gherkin also includes the # character to start a comment line, and multiple lines may be commented out using triple quotation marks (strictly speaking, """ delimits a multi-line Doc String in Gherkin, but the contents will not be executed as steps). Here’s an example of usage

# This is a comment
	
Scenario Outline: Add a project to the project list with an invalid name
	Given the project name <project-name>
	"""
	Given the project name <project-id>
	"""
	When I press the OK button
	Then the project should not be added

	Examples: 
	| project-name |
	| ""           |
	| "Default"    |

It should be noted, having listed these different keywords and their uses, that you can also create a scenario that’s just a Given followed by a Then (in other words, a setup step followed by an assertion) if this is all you need.

For example

Scenario: Add a project
	Given a valid project name
	Then the project list should increase by 1

SpecFlow specifics

On top of the standard Gherkin keywords, SpecFlow adds a few bits.

The @ignore tag is used by SpecFlow to generate ignored test methods.

SpecFlow also adds Scenario Template as a synonym for Scenario Outline. Likewise, Scenarios is an alternative to Examples.
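So the earlier Scenario Outline could equally be written with these synonyms (same behaviour, just different keywords):

```gherkin
Scenario Template: Add a project to the project list with an invalid name
	Given the project name <project-name>
	When I press the OK button
	Then the project should not be added

	Scenarios: 
	| project-name |
	| ""           |
	| "Default"    |
```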

Code generation

We’re not going to delve into the Cucumber or SpecFlow generated code, except to point out that if you define more than one step with the same text, this will generate a call to the same method in code. So whilst you might read a scenario as if it’s a new context or the like, ultimately the generated code will execute the same method.

References

Gherkin Reference
Using Gherkin Language In SpecFlow

Creating a pre-commit hook for TortoiseSvn

Occasionally we will either have files in SVN we don’t want updated or files we don’t want committed to source control. In the latter case you might ignore those files, but equally they might be part of a csproj (for example) and you don’t want them updated after the initial check-in.

A great example of this is a configuration file, checked in initially with a placeholder for a private key. Thus on your CI/build box this file is included in the build but the private key is kept out of the repo.

I’m using TortoiseSvn and whilst this may well be a standard feature of SVN I’ll be discussing it from the TortoiseSvn point of view.

A pre-commit hook can be set-up on your local machine. Obviously in some cases it’d be preferable to have the server handle such tasks, but in situations where that’s not possible we can create our own code and hook into TortoiseSvn’s pre-commit hook.

We’re going to write this in .NET (although it’s easy to implement in any language) as ultimately we’ll just be creating a pretty standard console application.

Configuring the hook in TortoiseSVN

In the Settings | Hook Scripts section we can add a hook: select Pre-Commit Hook from the Hook Type dropdown, and set the Working Copy Path to our local copy of the repository this hook should apply to. Hence this is not a hook against all projects or repositories you might have checked out, but is on a per-checkout basis. Next up, the Command Line To Execute should be the full path (including the EXE or script) to the application which should be executed prior to a commit.

Writing our pre-commit application

We can write the application in any language, or as a script if we are able to run the interpreter for it (for example, the TortoiseSVN dialog settings in the previously listed link show WScript running a JavaScript file).

For our purposes we’re going to create a Console application.

When executed, our application will be passed arguments (via the Main method’s arguments). There are four arguments sent to your application in the following order

  • The path and filename of a .tmp file which contains a newline delimited list of the files to be committed
  • The depth of commit/update
  • The path and filename of a .tmp file which contains the message that is to be saved as part of the commit
  • The current working directory
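In code, our pre-commit application might unpack those arguments like this (a sketch; the variable names are mine):

```csharp
using System;
using System.IO;

class Program
{
   static int Main(string[] args)
   {
      // arguments arrive in the order listed above
      var pathsFile = args[0];   // .tmp file listing the files to be committed
      var depth = args[1];       // depth of the commit
      var messageFile = args[2]; // .tmp file containing the commit message
      var cwd = args[3];         // current working directory

      // e.g. the newline delimited list of files in this commit
      var files = File.ReadAllLines(pathsFile);
      Console.WriteLine("{0} file(s) in this commit", files.Length);

      return 0; // 0 allows the commit to continue
   }
}
```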

Note: different hooks receive different arguments, see https://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-settings.html, but they are listed here also for completeness

  • Start-commit
    Arguments – PATH, MESSAGEFILE, CWD
  • Manual Pre-commit
    Arguments – PATH, MESSAGEFILE, CWD
  • Pre-commit
    Arguments – PATH, DEPTH, MESSAGEFILE, CWD
  • Post-commit
    Arguments – PATH, DEPTH, MESSAGEFILE, REVISION, ERROR, CWD
  • Start-update
    Arguments – PATH, CWD
  • Pre-update
    Arguments – PATH, DEPTH, REVISION, CWD
  • Post-update
    Arguments – PATH, DEPTH, REVISION, ERROR, CWD, RESULTPATH
  • Pre-connect
    Arguments – no parameters are passed to this script. You can pass a custom parameter by appending it to the script path.

The meaning of each argument is as follows

  • PATH
    A path to a temporary file which contains all the paths for which the operation was started. Each path is on a separate line in the temp file.

    Note that for operations done remotely, e.g. in the repository browser, those paths are not local paths but the urls of the affected items.

  • DEPTH
    The depth with which the commit/update is done.

    Possible values are:

    • -2 svn_depth_unknown
    • -1 svn_depth_exclude
    • 0 svn_depth_empty
    • 1 svn_depth_files
    • 2 svn_depth_immediates
    • 3 svn_depth_infinity
  • MESSAGEFILE
    Path to a file containing the log message for the commit. The file contains the text in UTF-8 encoding. After successful execution of the start-commit hook, the log message is read back, giving the hook a chance to modify it.
  • REVISION
    The repository revision to which the update should be done or after a commit completes.
  • ERROR
    Path to a file containing the error message. If there was no error, the file will be empty.
  • CWD
    The current working directory with which the script is run. This is set to the common root directory of all affected paths.
  • RESULTPATH
    A path to a temporary file which contains all the paths which were somehow touched by the operation. Each path is on a separate line in the temp file.

Your application or script should return 0 for success and anything other than 0 for failure. We can also write to the console error stream to return a message for TortoiseSVN to display.

Here’s a stupid little example which just stops any more check-ins

class Program
{
   static int Main(string[] args)
   {
      Console.Error.WriteLine("Stop checking in");
      return 1;
   }
}

Sample

The following code stops any check-ins of a file named keys.txt that has any data within it

public static class KeysFileCheck
{
   public static int CheckFiles(string fileList)
   {
      return CheckFiles(
         File.ReadAllLines(fileList)
            .Where(path => Path.GetFileName(path) == "keys.txt")
            .ToArray());
   }

   private static int CheckFiles(string[] keyFiles)
   {
      var result = 0;
      foreach (var keyFile in keyFiles)
      {
         using (var fs = new FileStream(keyFile, FileMode.Open, FileAccess.Read))
         {
            result |= CheckKeyFile(keyFile, fs);
         }
      }
      return result;
   }

   private static int CheckKeyFile(string filename, Stream stream)
   {
      using (var sr = new StreamReader(stream))
      {
         var data = sr.ReadToEnd();
         if (!String.IsNullOrEmpty(data))
         {
            Console.Error.WriteLine(
               "The keys file {0} cannot be saved with data in it as the build server will fail to build the solution", 
               filename);
            return 1;
         }
      }

      return 0;
   }
}

Our Console application’s Main method would then simply be

class Program
{
   static int Main(string[] args)
   { 
      return KeysFileCheck.CheckFiles(args[0]);
   }
}

IOException – the process cannot access the file because it is used by another process

I’m using a logging library which writes to a local log file, and I also have a diagnostic tool which allows me to view the log files; but if I try to use File.Open I get an IOException,

“the process cannot access the file because it is used by another process”

this is obviously self-explanatory (and sadly not the first time I’ve had this and had to try and recall the solution).

So, to save me searching for it, here’s the solution which allows me to open a file that’s already opened for writing by another process

using (var stream = 
   File.Open(currentLogFile, 
      FileMode.Open, 
      FileAccess.Read, 
      FileShare.ReadWrite))
{
   // stream reading code
}

The key to the File.Open line is the FileShare.ReadWrite. The FileShare flags declare what access we’re prepared to share with other processes; since the logging process already has the file open for writing, our open must permit both reading and writing by others, even though we only want FileAccess.Read for ourselves.
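Here’s a small self-contained sketch demonstrating both the failure and the fix. A “writer” stream stands in for the logging library; a reader that doesn’t grant write sharing cannot then open the file, while one using FileShare.ReadWrite can:

```csharp
using System;
using System.IO;

class Demo
{
   static void Main()
   {
      var path = Path.GetTempFileName();

      // simulate the logging library holding the file open for writing
      using (var writer = new FileStream(
         path, FileMode.Open, FileAccess.Write, FileShare.ReadWrite))
      {
         try
         {
            // our share flags (Read) deny the writer's existing write access
            File.Open(path, FileMode.Open, FileAccess.Read, FileShare.Read);
         }
         catch (IOException)
         {
            Console.WriteLine("FileShare.Read failed as expected");
         }

         // FileShare.ReadWrite tolerates the existing writer, so this succeeds
         using (var reader = File.Open(
            path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
         {
            Console.WriteLine("FileShare.ReadWrite opened the file for reading");
         }
      }
   }
}
```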

Replacing multiple ConfigureAwait with the SynchronizationContextRemover

First off, this post is based upon the post An alternative to ConfigureAwait(false) everywhere.

This is a neat solution using the knowledge of how async/await and continuations fit together.

The problem Ben Willi is solving is the situation where, as a writer of library code, one generally wants to use ConfigureAwait(false) so that the library does not block the synchronization context used by the application. Obviously this is of particular concern when the current synchronization context is the UI thread.

The issue is that if you have many awaits, each having to mark the async method/task with ConfigureAwait(false), things get a little messy, and any additional changes must also remember to use ConfigureAwait(false). Plus the synchronization onto the calling thread each time might cause lag in the application.

See Ben’s post for a full description of the problem that’s being solved as he has example code to demonstrate the pitfalls.

What I liked about it is that it’s also a reminder of how async/await works. What we’re trying to do is remove ConfigureAwait from all of the following sort of code

await RunAsync().ConfigureAwait(false);
await ProcessAsync().ConfigureAwait(false);
await CompleteAsync().ConfigureAwait(false);

If we ignore the ConfigureAwait for a moment, in pseudo code, these three calls end up like this via the compiler

RunAsync.OnComplete
(
   ProcessAsync().OnComplete
   (
      CompleteAsync().OnComplete
      (
      )
   )
)

What happens internally is that each call is wrapped in a continuation of the previous awaited call.

See also Creating awaitable types for a look at creating awaitable types.

Let’s first look at what the new code might look like, then we’ll look at how the solution works

await new SynchronizationContextRemover();

await RunAsync();
await ProcessAsync();
await CompleteAsync();

Here’s the code for the SynchronizationContextRemover

public struct SynchronizationContextRemover : INotifyCompletion
{
   public bool IsCompleted => SynchronizationContext.Current == null;

   public void OnCompleted(Action continuation)
   {
      var prev = SynchronizationContext.Current;
      try
      {
         SynchronizationContext.SetSynchronizationContext(null);
         continuation();
      }
      finally
      {
         SynchronizationContext.SetSynchronizationContext(prev);
      }
   }

   public SynchronizationContextRemover GetAwaiter()
   {
      return this;
   }

   public void GetResult()
   {            
   }
}

What the SynchronizationContextRemover does is make itself an awaitable type: its GetAwaiter returns this, i.e. an instance of itself, which then supplies the members an awaiter requires, namely the IsCompleted property and the GetResult and OnCompleted methods.

With the knowledge of how the awaits turn into continuations wrapping the next call etc., this would now form the following pseudo code

SynchronizationContextRemover.OnComplete
(
   RunAsync.OnComplete
   (
      ProcessAsync().OnComplete
      (
         CompleteAsync().OnComplete
         (
         )
      )
   )
)

Hence each awaitable method is nested, ultimately, in the OnCompleted method of SynchronizationContextRemover.

The SynchronizationContextRemover’s OnCompleted method then switches the synchronization context to null prior to the continuations being called, so that the other method calls also see a null synchronization context and the code is not marshalled back onto the original synchronization context (i.e. the UI thread).

Where have you been hiding dotMemoryUnit?

As a subscriber to all the JetBrains tools, I’m surprised that I didn’t know about dotMemoryUnit until it was mentioned on a recent episode of dotnetrocks. I’ve looked into memory-related unit testing in the past, as it’s an area (along with performance) that I find of particular interest as part of a CI build.

So here it is MONITOR .NET MEMORY USAGE WITH UNIT TESTS.

Prerequisites

Simply add the NuGet package JetBrains.DotMemoryUnit to your test project and you’ll get the library for writing dotMemory test code. If you want to run dotMemory unit tests from the command line (or from your build/CI server) you’ll need to download the runner from https://www.jetbrains.com/dotmemory/unit/.

To run the command line test runner you’ll need to run dotMemoryUnit.exe passing in your test runner application and the DLL to be tested through dotMemoryUnit. For example

dotMemoryUnit "C:\Program Files (x86)\NUnit 2.6.3\bin\nunit-console.exe" MyDotMemoryTests.dll

If you’re using ReSharper, the test runner supports dotMemory Unit, but you’ll need to select the option Run under dotMemory Unit or you’ll see the message

DotMemoryUnitException : The test was run without the support for dotMemory Unit. To safely run tests with or without (depending on your needs) the support for dotMemory Unit:

in ReSharper’s test runner output window.

Getting started

The key thing to bear in mind is that dotMemoryUnit is not a unit testing framework and not an assertion framework, so you’ll still write NUnit, xUnit, MSTest etc. code (unsupported test frameworks can be integrated with dotMemoryUnit, see Working with Unsupported Unit Testing Frameworks) and you can still use the assertions from these libraries, or the likes of Should, Shouldly, FluentAssertions etc.

dotMemoryUnit simply allows us to add code to investigate memory usage, so here’s an example

[Test]
public void FirstTest()
{
   var o = new MyObject();

   dotMemory.Check(memory =>
      Assert.That(
         memory.GetObjects(
            where => where.Type.Is<MyObject>())
         .ObjectsCount, Is.EqualTo(0)));

   GC.KeepAlive(o);
}

In this example, we create an object and then use dotMemory to carry out a memory “check”. The dotMemory.Check call acts as a checkpoint (or snapshot) and returns a MemoryCheckpoint object which can be used to compare two checkpoints; we can also carry out a query against the memory within the lambda passed to the call.

Here, you can see, we get the memory and then search for objects of type MyObject. We then assert that this object count is zero. In the above, the GC.KeepAlive(o) ensures the object is not garbage collected during the test run, so the object is still alive and this test will fail.
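For instance, a checkpoint from one Check call can be compared against in a later one via GetTrafficFrom. The following is a sketch based on my reading of the docs (note that traffic data needs allocation collection enabled, covered later):

```csharp
[Test]
[DotMemoryUnit(CollectAllocations = true)]
public void SecondTest()
{
   // first checkpoint, no query
   var checkpoint = dotMemory.Check();

   var o = new MyObject();

   // second check: query the traffic since the first checkpoint
   dotMemory.Check(memory =>
      Assert.That(
         memory.GetTrafficFrom(checkpoint)
            .Where(where => where.Type.Is<MyObject>())
            .AllocatedMemory.ObjectsCount,
         Is.EqualTo(1)));

   GC.KeepAlive(o);
}
```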

See Checking for Objects

Beware that when running dotMemoryUnit, a .dmw file is created (by default) for failing tests, and these are placed in your %USERPROFILE%\AppData\Local\Temp\dotMemoryUnitWorkspace folder. There is an auto-delete policy when running dotMemoryUnit, but these files are often MBs in size. We can change the default location on a per-test basis using the DotMemoryUnitAttribute, i.e.

Note: The .dmw file is viewable in the dotMemory application, which is part of the JetBrains suite of applications.

[DotMemoryUnit(Directory = "c:\\Temp\\dotMemoryUnit")]
public void EnsureMemoryFreed()
{
   // asserts
}

We’ve seen the use of the dotMemory.Check method. We can also use the dotMemoryApi class, which gives a slightly different interface; for example, the following code creates two snapshots and then checks how many objects of type MyObject were created between the snapshots

[Test]
public void AllocateFourMyObjects()
{
   var snap1 = dotMemoryApi.GetSnapshot();

   var f1 = new MyObject();
   var f2 = new MyObject();
   var f3 = new MyObject();
   var f4 = new MyObject();

   var snap2 = dotMemoryApi.GetSnapshot();

   Assert.That(
      snap2.GetObjects(w => w.Type.Is<MyObject>()).ObjectsCount -
      snap1.GetObjects(w => w.Type.Is<MyObject>()).ObjectsCount, 
      Is.EqualTo(4));
}

We can also use the AssertTrafficAttribute (note: this seemed very slow via ReSharper’s GUI, whereas the console app was much faster). This works in a similar way to the above, although in this example we’re asserting that the allocated object count of type MyObject does not exceed 4, whereas in the previous code we stated that we expect exactly 4 allocations, so be aware of the difference here.

[
Test,
AssertTraffic(AllocatedObjectsCount = 4, Types = new [] { typeof(MyObject)})
]
public void AllocateFourMyObjects()
{
   var f1 = new MyObject();
   var f2 = new MyObject();
   var f3 = new MyObject();
   var f4 = new MyObject();
}

Finally, we can also use snapshots and the dotMemoryApi.GetTrafficBetween method to do something similar to the last two examples

[
Test, DotMemoryUnit(CollectAllocations = true)
]
public void AllocateFourMyObjects()
{
   var snap1 = dotMemoryApi.GetSnapshot();

   var f1 = new MyObject();
   var f2 = new MyObject();
   var f3 = new MyObject();
   var f4 = new MyObject();

   var snap2 = dotMemoryApi.GetSnapshot();

   var traffic = dotMemoryApi.GetTrafficBetween(snap1, snap2);
   var o = traffic.Where(w => w.Type.Is<MyObject>());
   Assert.AreEqual(4, o.AllocatedMemory.ObjectsCount);
}

To collect allocation data we need to set the DotMemoryUnitAttribute’s CollectAllocations property to true. For the AssertTrafficAttribute this happens automatically.

That’s it for now. Go and experiment.

References

dotMemoryUnit Help

Guiding the conversation with the Bot framework and FormFlow

FormFlow creates and manages a “guided conversation”. It can be used to gain input from a user in a menu driven kind of way.

Note: the example on the Basic features of FormFlow page covers the basic features really well. In my post I’ll just try to break down these steps and hopefully add some useful hints/tips.

Let’s get started

Let’s implement something from scratch to gain an idea of the process of creating a form flow. Like the example supplied by Microsoft, we’ll begin by creating a set of options as enumerations and get FormFlow to create the conversation for us. We’re going to create a really simple PC building service.

First off we’re going to create the PcBuilder class and hook it into the “conversation”. Here’s the builder

[Serializable]
public class PcBuilder
{
   public static IForm<PcBuilder> BuildForm()
   {
      return new FormBuilder<PcBuilder>()
         .Message("Welcome to PC Builder")
         .Build();
   }
}

Now in the MessageController.cs we want the ActivityType.Message to be handled like this

if (activity.Type == ActivityTypes.Message)
{
   await Conversation.SendAsync(
      activity, 
      () => Chain.From(
         () => FormDialog.FromForm(PcBuilder.BuildForm)
      )
   );
}

When a message comes in to initiate a conversation (i.e. just type some text into the Bot emulator and press Enter to initiate a message), the FormDialog will take control of our conversation, using the PcBuilder to create a menu-driven entry form.

Note: Running this code “as is” will result in a very short conversation. No options will be displayed and nothing will be captured.

In its basic form we can use enumerations and fields to capture information. So, for example, our first question to a user wanting to build a PC is “what processor do you want?”, and we could simply declare an enum such as

public enum Processor
{
   IntelCoreI3,
   IntelCoreI7,
   ArmRyzen3
}

and we need to not only capture this but also tell FormFlow to use it. All we need to do is add a field to the PcBuilder class such as

public class PcBuilder
{
   public Processor? Processor;

   // other code
}

Now if we initiate a conversation, FormFlow takes a real good stab at displaying the processor options in a human readable way. On my emulator this displays

Welcome to PC Builder

Please select a processor
Intel Core I 3
Intel Core I 7
Arm Ryzen 3

and now FormFlow waits for me to choose an option. Once chosen (I chose Intel Core I 7) it’ll display

Is this your selection?
   Processor: Intel Core I 7

to which the expected response is Y, Yes, N or No (case insensitive). A “no” will result in the menu being displayed again and the user can begin choosing options from scratch.

The first problem I can see is that, whilst FormFlow takes a good stab at converting the enums into human-readable strings, we know that Intel Core I 7 would usually be written as Intel Core i7, so it’d be good if we had something like the System.ComponentModel DescriptionAttribute to apply to the enum for FormFlow to read.

Luckily they’ve thought of this already with the DescribeAttribute, which allows us to override the description text. However, if you change the code to

public enum Processor
{
   [Describe("Intel Core i3")]
   IntelCoreI3,
   [Describe("Intel Core i7")]
   IntelCoreI7,
   [Describe("ARM Ryzen 3")]
   ArmRyzen3
}

things will not quite work as hoped. Selecting either of the Intel options (even via the buttons in the emulator) will result in a “By ‘Intel Core’ processor did you mean” message with the two Intel options, and selecting either will then result in “‘Intel Core i7’ is not a processor option”. What we need to do is add terms to the enum to override the “term” used for matching the selection, so our code now looks like this

public enum Processor
{
   [Describe("Intel Core i3")]
   [Terms("Intel Core i3", "i3")]
   IntelCoreI3,
   [Describe("Intel Core i7")]
   [Terms("Intel Core i7", "i7")]
   IntelCoreI7,
   [Describe("ARM Ryzen 3")]
   ArmRyzen3
}

Let’s move things along…

Next we want the user to choose from some predefined memory options, so again we’ll add an enum for this (I’m not going to bother adding Describe and Terms to these, just to reduce the code)

public enum Memory
{
   TwoGb, FourGb, EightGb, 
   SixteenGb, ThirtyTwoGb, SixtyFourGb
}

To register these within the FormFlow conversation we add a field to the PcBuilder, like we did with the Processor. The order is important: place the field before Processor and this will be the first question asked; place it after and obviously the Processor will be asked about first. So we now have

public class PcBuilder
{
   public Processor? Processor;
   public Memory? Memory;

   // other code
}

So far we’ve looked at single options, but what if we have a bunch of “add-ons” to our PC builder? You might want to add speakers, a keyboard, a mouse etc. We can simply add a new enum and then a field of type List<AddOns>. For example

public enum AddOns
{
   Speakers = 1, Mouse, Keyboard, MouseMat
}

Note: the 0 value of an enum is reserved for unknown values, so either do as above, where Speakers (in this example) starts the enum at the value 1, or put an Unknown (or whatever name you want) as the 0 value.

Note: Also, don’t use IList for your field or you’ll find no options are displayed. Of course this makes sense, as the field is not a concrete type that can be created by FormFlow.

Note: By default, a list of options will not include duplicates. Hence an input of 1, 1, 4 will result in the values Speakers and MouseMat (no second set of Speakers).

Dynamic fields

In our example we’ve allowed a pretty standard set of inputs, but what if the user chose an Intel Core i3 and now needed to choose a motherboard? It would not make sense to offer up i7-compatible or ARM-compatible motherboards. So let’s look at how we might solve this. We’ll create an enum for motherboards, like this

public enum Motherboard
{
   I3Compatible1,
   I3Compatible2,
   I7Compatible,
   ArmCompatible
}

It’s been a while since I built my last computer, so I’ve no idea what the current list of possible motherboards might be. But this set of options should be self-explanatory.

Currently (I haven’t found an alternative for this) the way to achieve this is to take over the creation and handling of the fields ourselves. For example, BuildForm would now contain the following code

return new FormBuilder<PcBuilder>()
   .Message("Welcome to PC Builder")
   .Field(new FieldReflector<PcBuilder>(nameof(Processor)))
   .Field(new FieldReflector<PcBuilder>(nameof(Motherboard))
      .SetType(typeof(Motherboard))
      .SetDefine((state, f) =>
      {
         const string i3Compat1 = "i3 Compatible 1";
         const string i3Compat2 = "i3 Compatible 2";
         const string i7Compat = "i7 Compatible";
         const string armCompat = "ARM Compatible";

         f.RemoveValues();
         if (state.Processor == Dialogs.Processor.IntelCoreI3)
         {
            f.AddDescription(Dialogs.Motherboard.I3Compatible1, i3Compat1);
            f.AddTerms(Dialogs.Motherboard.I3Compatible1, i3Compat1);
            f.AddDescription(Dialogs.Motherboard.I3Compatible2, i3Compat2);
            f.AddTerms(Dialogs.Motherboard.I3Compatible2, i3Compat2);
         }
         else if (state.Processor == Dialogs.Processor.IntelCoreI7)
         {
            f.AddDescription(Dialogs.Motherboard.I7Compatible, i7Compat);
            f.AddTerms(Dialogs.Motherboard.I7Compatible, i7Compat);
         }
         else if (state.Processor == Dialogs.Processor.ArmRyzen3)
         {
            f.AddDescription(Dialogs.Motherboard.ArmCompatible, armCompat);
            f.AddTerms(Dialogs.Motherboard.ArmCompatible, armCompat);
         }
         else
         {
            f.AddDescription(Dialogs.Motherboard.I3Compatible1, i3Compat1);
            f.AddTerms(Dialogs.Motherboard.I3Compatible1, i3Compat1);

            f.AddDescription(Dialogs.Motherboard.I3Compatible2, i3Compat2);
            f.AddTerms(Dialogs.Motherboard.I3Compatible2, i3Compat2);

            f.AddDescription(Dialogs.Motherboard.I7Compatible, i7Compat);
            f.AddTerms(Dialogs.Motherboard.I7Compatible, i7Compat);

            f.AddDescription(Dialogs.Motherboard.ArmCompatible, armCompat);
            f.AddTerms(Dialogs.Motherboard.ArmCompatible, armCompat);
         }
         return Task.FromResult(true);
      }))
   .OnCompletion(OnCompletion)
   .AddRemainingFields()
   .Build();

Notice that once we start to supply the fields ourselves, we take control of both which fields appear and the order in which data entry takes place.

Of course the code to supply the descriptions/terms could be a lot nicer.

Customization

We can customize some of the default behaviour (as seen with Terms and Describe). We can also change the prompt for a field, for example

[Prompt("What {&} would you like? {||}")]
public List<AddOns> AddOns;

Now when this part of the conversation is reached the prompt will say “What add ons would you like?” and then list them. The {&} is replaced by the field description and {||} by the options.

We can also mark a field as Optional, so for example we don’t want to force a user to select an AddOn

[Optional]
public List<AddOns> AddOns;

Now a fifth option, “No Preference”, is added to our list. If the user selects it, the field will be null.

Other FormFlow attributes include Numeric (allowing us to place restrictions on the range of values input), Pattern (allowing us to define a RegEx to validate a string field) and Template (allowing us to supply the template used to generate prompts and prompt values).
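As a sketch of how these might be applied (the attribute names are from the FormFlow library; the field names and values here are purely illustrative)

```csharp
// Numeric restricts the accepted range of a numeric field
[Numeric(1, 4)]
public int? MonitorCount;

// Pattern validates a string field against a regular expression,
// here a made-up order reference format of two letters and four digits
[Pattern(@"^[A-Z]{2}\d{4}$")]
public string OrderReference;
```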

How do we use our data

So we’ve gathered our data, but at the end of the conversation we need to actually do something with it, like place an order.

To achieve this we amend our BuildForm method and add a method to handle the data upon completion, i.e.

public static IForm<PcBuilder> BuildForm()
{
   return new FormBuilder<PcBuilder>()
      .Message("Welcome to PC Builder")
      .OnCompletion(OnCompletion)
      .Build();
}

private static Task OnCompletion(IDialogContext context, PcBuilder state)
{
   // the state argument includes the selected options.
   return Task.CompletedTask;
}

The Bot equivalent of message boxes

In Windows (and most user interface frameworks) we have the concept of message boxes and dialog boxes.

We’ve already seen that we would create an implementation of an IDialog to interact with user input/commands, but the Bot framework also includes the equivalent of a Yes/No message box. For example

public async Task MessageReceivedAsync(
   IDialogContext context, 
   IAwaitable<IMessageActivity> argument)
{
   PromptDialog.Confirm(
      context,
      CalculateAsync,
      "Do you want to calculate risk?",
      "Unknown option");
}

public async Task CalculateAsync(
   IDialogContext context, 
   IAwaitable<bool> argument)
{
   var confirm = await argument;
   await context.PostAsync(confirm ? "Calculated" : "Not Calculated");
   context.Wait(MessageReceivedAsync);
}

What we have now is a prompt dialog which, when seen in the Bot framework channel emulator, will ask the question “Do you want to calculate risk?”. The prompt will also display two buttons, Yes and No. The user can press a button or type Y, N, Yes or No. Assuming a valid response is given, the CalculateAsync method is called. If the response is not Y, Yes, N or No (on an English language setup, obviously) then the prompt is redisplayed with the “Unknown option” reply that we specified and the dialog again waits for input (you can set the number of retries if, for example, you want the user to have three attempts to respond correctly to a prompt).

We can remove the Yes/No buttons by using promptStyle: PromptStyle.None, i.e.

PromptDialog.Confirm(
   context,
   CalculateAsync,
   "Do you want to calculate risk?",
   "Unknown option", 
   promptStyle: PromptStyle.None);
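The retry count mentioned earlier is supplied via a further parameter on Confirm; a sketch (treat the exact parameter name, attempts, as my reading of the overloads rather than gospel) might look like

```csharp
PromptDialog.Confirm(
   context,
   CalculateAsync,
   "Do you want to calculate risk?",
   "Unknown option",
   attempts: 3,   // give the user up to three tries to respond correctly
   promptStyle: PromptStyle.Auto);
```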

Using the Bot framework

Introduction

If you’re not sure what a Bot is, then check out What is Microsoft Bot Framework Overview.

A Bot doesn’t have to be intelligent and/or understand complex input. Although this sounds like the most exciting aspect of using Bots, we could also just use a Bot as an API into our services/applications.

What Bots do generally have is some simple user interface for a user to interact with the Bot. We’ve probably heard of or used a “chat bot” in the guise of a text input box which the user types into, but the bot could take input from buttons as well or other control/input mechanisms.

Prerequisites

Whilst we can write our Bot code from scratch we may as well use the Bot items templates which can be downloaded from http://aka.ms/bf-bc-vstemplate

Cortana skills templates can be downloaded from https://aka.ms/bf-cortanaskill-template but are not required for this post.

Once downloaded, place the zip files (for these templates) into %USERPROFILE%\Documents\Visual Studio 2017\Templates\ProjectTemplates\Visual C#.

Now when you start/restart Visual Studio 2017 these templates will be available in New | Project.

Next up you might like to download the Bot framework emulator from Microsoft/BotFramework-Emulator on github. I’m using the botframework-emulator-Setup-3.5.31 version for this and other posts.

Writing my first Bot

  • Create New | Project and select Bot application (mine’s named TestBot)
  • Build the project to pull in all the NuGet packages

At this point we have an ASP.NET application, or more specifically we’ve created an ASP.NET Web API. The key areas of interest for us (at this point) are the Controllers/MessagesController.cs file and the Dialogs/RootDialog.cs file.

At this point the template has created an Echo bot. We can run this Bot application and you’ll be presented with a web page stating that we need to register our Bot. Of course we’d probably prefer to test things out first on our local machine without this extra requirement, so we can run the Bot Framework Channel Emulator (as mentioned in the prerequisites section). Let’s see our Bot in action

  • Run your Bot application
  • Run the Bot emulator
  • In the emulator enter the URL http://localhost:3979/api/messages (obviously replace localhost:3979 with whatever host/port you are running your Bot application from)
  • If all worked, simply type some message into the “Type your message…” textbox and the Bot should respond with “You sent {your message} which was {n} characters”

If you click on either the message or response, the emulator will display the JSON request and response data.

What’s the code doing?

We’ve seen that we can communicate with our Bot application but what’s really happening?

Obviously the Bot framework handles a lot of the communications for us. I’m not going to go too in depth with this side of things for the simple reason that, at the moment, I don’t know enough about how it works to confidently state how everything fits together. So hopefully I’ll get around to revisiting this question at another time.

I mentioned that MessagesController.cs was a key file. This is where we receive messages and also where system messages can be handled. Think of it as the entry point to your application. In the template echo Bot we have this code

if (activity.Type == ActivityTypes.Message)
{
   await Conversation.SendAsync(activity, () => new Dialogs.RootDialog());
}

When a Message type activity is received (types can be Message, Event, Ping and more, see ActivityTypes) we create a RootDialog to handle the conversation.

If you take a look at Dialogs/RootDialog.cs within your solution you’ll see that this implements the IDialog<> interface, which requires Task StartAsync(IDialogContext context) to be implemented. This method basically calls the MessageReceivedAsync method (shown below), which is where our “echo” comes from

private async Task MessageReceivedAsync(
   IDialogContext context, 
   IAwaitable<object> result)
{
   var activity = await result as Activity;

   // calculate something for us to return
   int length = (activity.Text ?? string.Empty).Length;

   // return our reply to the user
   await context.PostAsync(
      $"You sent {activity.Text} which was {length} characters");

   context.Wait(MessageReceivedAsync);
}

Making the Bot a little more useful

We’ve looked at generating an echo Bot, which is a great starting point, but let’s now start coding something of our own.

You’ll notice from the code

int length = (activity.Text ?? string.Empty).Length;

This shows that the activity is supplied with the text sent to the Bot via the message input. Obviously in a more complicated interaction one might send this text to an NLP service, such as LUIS, or, slightly more simply, use RegEx, or of course create your own parser. But if our Bot was just designed to respond to simple commands, be they sent as messages via a user’s textual input or from a button press, then the MessageReceivedAsync code could be turned into the equivalent of a switch statement.

Let’s rewrite the MessageReceivedAsync method to act as a basic calculator where we have three commands, using ? to list the commands, and the add and subtract commands, both of which take 2 parameters (we could easily extend these to more parameters or switch to using operators but let’s start simple).

Simply remove the existing MessageReceivedAsync method and replace with the following

private async Task MessageReceivedAsync(
   IDialogContext context, IAwaitable<object> result)
{
   var activity = await result as Activity;

   string response = "I do not understand this command";
   var command = activity
      .Text
      .ToLower()
      .Split(new []{' '}, 
         StringSplitOptions.RemoveEmptyEntries);

   switch (command[0])
   {
      case "?":
         response = ListCommands();
         break;
      case "add":
         response = Add(command[1], command[2]);
         break;
      case "subtract":
         response = Subtract(command[1], command[2]);
         break;
   }

   await context.PostAsync(response);

   context.Wait(MessageReceivedAsync);
}

private string ListCommands()
{
   return "? => list commands\n\nadd a b => adds a and b\n\nsubtract a b => subtract a from b\n\n";
}

private string Add(string a, string b)
{
   double v1, v2;
   if (Double.TryParse(a, out v1) && Double.TryParse(b, out v2))
   {
      return (v1 + v2).ToString();
   }

   return "Non numeric detected";
}

private string Subtract(string a, string b)
{
   double v1, v2;
   if (Double.TryParse(a, out v1) && Double.TryParse(b, out v2))
   {
      return (v2 - v1).ToString();
   }

   return "Non numeric detected";
}

Obviously we’re light on error handling code etc. but you get the idea.

Note: In the ListCommands, to output a new line (i.e. display the list of commands in a multi-line response using the emulator) we need to use two newline characters. We could call context.PostAsync multiple times but this would appear as different message responses.

Fixie, convention based unit testing

I’m always looking around at frameworks/libraries etc. to see if they enhance my development experience or offer new tools/capabilities that could be of use. I’m not wholly sure how I came across Fixie, but I wanted to check it out to see what it had to offer.

Introduction

Fixie is a convention based unit testing library, think NUnit or xUnit but without attributes.

It’s amusing that, after re-reading the sentence above, I totally missed the irony that Fixie is really more of a throwback to JUnit, with its convention based testing, than the .NET equivalents with their attributes.

I’m not usually a big fan of convention based libraries, but there’s no doubt that if you’re happy with the convention(s) then code can look pretty clean.

The coolest thing with Fixie (well, it’s cool to me) is that you simply add the NuGet package to your solution, but don’t need to add a using directive that references Fixie (unless you wish to switch from the default conventions).

Fixie is immediately (or as near as immediately) recognized by the Visual Studio Test Runner, but ReSharper’s test runner, sadly, does not work with it “out of the box” (I believe there is a ReSharper test runner, but I’ve not tried it and it didn’t appear in their extension manager, so I’m not sure of its state at the time of writing).

Default Convention

The default convention is that your test class name is suffixed with the word Tests and methods can be named anything, but must be public and return either void or Task (for async tests), i.e.

public class MatrixTests
{
   public void AddMatrix()
   {
      // test based code
   }

   public async Task SubtractMatrix()
   {
      // test based code
   }
}

I’ve purposefully not written any assertion code because Fixie does not supply any. Such code should be supplied using the likes of Should, Shouldly, FluentAssertions etc.
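For example, using Shouldly’s ShouldBe extension method (the arithmetic here is just a stand-in for whatever your code under test does), a test body might look like

```csharp
public class MatrixTests
{
   public void AddMatrix()
   {
      // stand-in for some real matrix calculation
      var result = 2 + 3;

      // the assertion comes from Shouldly, not from Fixie itself
      result.ShouldBe(5);
   }
}
```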

Custom Convention

Fixie gives us the tools to define our own custom convention if we want to change from the default. This includes the ability to not only change the naming conventions but also to include attributes to mark tests (like NUnit/xUnit uses). In fact, with this capability Fixie can be used to run NUnit or xUnit style tests (as we’ll see later). For now let’s just create our own test attribute

[AttributeUsage(AttributeTargets.Method)]
public class TestAttribute : Attribute
{
}

So this attribute can only be declared on methods. To implement a new convention we simply create a class derived from Fixie’s Convention class, for example

public class TestConvention : Convention
{
   public TestConvention()
   {
      Methods.HasOrInherits<TestAttribute>();
   }
}

so now our tests might look like this

public class MatrixTests
{
   [Test]
   public void AddMatrix()
   {
      // test based code
   }

   [Test]
   public async Task SubtractMatrix()
   {
      // test based code
   }
}

So in essence, using Fixie, we could create our own test framework (of sorts), using (as already mentioned) Should, Shouldly, FluentAssertions etc. to handle the asserts. However, I’m sure that’s ultimately not what this functionality is for; rather, it allows us to define our own preferred conventions.

See also Custom Conventions

NUnit & xUnit Convention

I mentioned that we could use Fixie to execute against other testing frameworks and on the Fixie github repository, we can find two such convention classes (and support classes) to allow Fixie to run NUnit or xUnit style tests.

See NUnit Style and xUnit Style.

I doubt the intention here is to replace NUnit or xUnit with Fixie as obviously you’d need to support far more capabilities, but these convention repositories give a great overview of what can be done on the Fixie convention front.