Author Archives: purpleblob

Getting started with Bond

What’s Bond?

The Microsoft GitHub repository for Bond states that “Bond is an open source, cross-platform framework for working with schematized data. It supports cross-language serialization/deserialization and powerful generic mechanisms for efficiently manipulating data.”

To put it another way, Bond appears to be similar to Google’s protocol buffers. See Why Bond? for more on what Bond is.

At the time of writing, out of the box, Bond supports C++, C# and Python language bindings.

Jumping straight in

Let’s jump straight in and write some code. Here are the steps to create our project:

  • For this example, let’s create a Console project in Visual Studio
  • Using NuGet add the Bond.CSharp package

Now we’ll define our schema using Bond’s IDL. This should be saved with the .bond extension, so in my case, this code is in Person.bond

namespace Sample

struct Person
{
    0: string FirstName;
    1: string LastName;
    2: int32 Age;
}

Notice that we create a namespace and then use a Bond struct to define our data. Within the struct, each item of data is preceded by a numeric id (see IDL Syntax for more information).

Now, we could obviously write the code to represent this IDL by hand, but it’d be better still if we could generate the source code from the IDL. When we added the Bond.CSharp NuGet package we also got a copy of gbc, the command-line tool for this purpose.

Open up a cmd prompt and locate gbc (mine was installed into \packages\Bond.CSharp.5.0.0\tools). From here we run the following

gbc c# <Project>\Person.bond -o=<Project>

Replace <Project> with the file path of your application, where your .bond file is located.

This command will generate the source files from the Person.bond IDL and output (the -o switch) to the root of the project location.

Now we need to include the generated files in our project – mine now includes Person_interfaces.cs, Person_proxies.cs, Person_services.cs and Person_types.cs. In fact, we only need Person_types.cs for this example. It contains the C# representation of our IDL and looks (basically) like this

public partial class Person
{
   [global::Bond.Id(0)]
   public string FirstName { get; set; }

   [global::Bond.Id(1)]
   public string LastName { get; set; }

   [global::Bond.Id(2)]
   public int Age { get; set; }

   public Person()
      : this("Sample.Person", "Person")
   {}

   protected Person(string fullName, string name)
   {
      FirstName = "";
      LastName = "";
   }
}

Let’s now look at some code for writing a Person to the Bond serializer.

Note: There is an example of serialization code in the guide to Bond. It shows the static helper methods, Serialize.To and Deserialize<T>.From; however, these are not the most optimal for non-trivial code, so I’ll ignore them in my example.

Using directive

Bond includes both a Bond.IO.Safe and a Bond.IO.Unsafe namespace; according to the documentation, the Unsafe namespace contains the fastest code, so for this example I’m using Bond.IO.Unsafe.

How to write an object to Bond

var src = new Person
{
   FirstName = "Scooby",
   LastName = "Doo",
   Age = 7
};

var output = new OutputBuffer();
var writer = new CompactBinaryWriter<OutputBuffer>(output);
var serializer = new Serializer<CompactBinaryWriter<OutputBuffer>>(typeof(Person));

serializer.Serialize(src, writer);

The Serialize.To helper allows us to dispense with the explicit serializer, but the initial call creates the serializer behind the scenes, which can be a performance hit if used inside a loop or the like. Creating the serializer upfront and reusing that instance in any loops provides better overall performance.

How to read an object from Bond

var input = new InputBuffer(output.Data);
var reader = new CompactBinaryReader<InputBuffer>(input);
var deserializer = new Deserializer<CompactBinaryReader<InputBuffer>>(typeof(Person));

var dst = deserializer.Deserialize(reader);

In the above code we’re getting the input from the OutputBuffer we created when writing the data, although this is just to demonstrate usage. The InputBuffer can also take a byte[] representing the data to be deserialized.

Where possible, InputBuffers and OutputBuffers should also be reused; simply set buffer.Position = 0 to reset them after use.
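Putting those two performance points together, here’s a sketch of reusing one serializer and one buffer across many writes. This assumes the Bond.CSharp NuGet package and the generated Person type from earlier; the WriteAll wrapper is just an illustrative name of mine, not part of Bond.

```csharp
// Sketch: create the Serializer and OutputBuffer once and reuse them per
// iteration, rather than allocating inside the loop.
using Bond;
using Bond.IO.Unsafe;
using Bond.Protocols;

public static class BondWriteLoop
{
    public static void WriteAll(Person[] people)
    {
        // created once, reused for every iteration
        var serializer = new Serializer<CompactBinaryWriter<OutputBuffer>>(typeof(Person));
        var output = new OutputBuffer();

        foreach (var person in people)
        {
            output.Position = 0; // reset rather than allocate a new buffer
            var writer = new CompactBinaryWriter<OutputBuffer>(output);
            serializer.Serialize(person, writer);

            // consume output.Data here before the next iteration overwrites it
        }
    }
}
```

The same idea applies on the read side with a reused InputBuffer.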

Serialization Protocols

In the previous code we used the CompactBinary classes, which implement binary serialization (optimized for compactness, as the name suggests), but there are several other serialization protocols.

The FastBinaryReader/FastBinaryWriter classes are optimized for speed, and plug into our sample code like this

var writer = new FastBinaryWriter<OutputBuffer>(output);
var serializer = new Serializer<FastBinaryWriter<OutputBuffer>>(typeof(Person));

and

var reader = new FastBinaryReader<InputBuffer>(input);
var deserializer = new Deserializer<FastBinaryReader<InputBuffer>>(typeof(Person));

The SimpleBinaryReader/SimpleBinaryWriter classes offer a potential saving on payload size.

var writer = new SimpleBinaryWriter<OutputBuffer>(output);
var serializer = new Serializer<SimpleBinaryWriter<OutputBuffer>>(typeof(Person));

and

var reader = new SimpleBinaryReader<InputBuffer>(input);
var deserializer = new Deserializer<SimpleBinaryReader<InputBuffer>>(typeof(Person));

Human readable serialization protocols

At the time of writing, Bond supports two human-readable protocols: XML and JSON.

Let’s look at the changes required to read/write JSON.

The JSON protocol can be used with the .bond file as previously defined, or we can add the JsonName attribute to the fields to produce

namespace Sample

struct Person
{
    [JsonName("First")]
    0: string FirstName;
    [JsonName("Last")]
    1: string LastName;
    [JsonName("Age")]
    2: int32 Age;
}

if we want to support JSON with named attributes. The easiest way to use the SimpleJsonReader/SimpleJsonWriter is with a string buffer (a StringBuilder in C# terms), so here’s the code to write our Person object to a JSON string

var sb = new StringBuilder();
var writer = new SimpleJsonWriter(new StringWriter(sb));
var serializer = new Serializer<SimpleJsonWriter>(typeof(Person));

serializer.Serialize(src, writer);

to deserialize the string back to an object we can use

var reader = new SimpleJsonReader(new StringReader(sb.ToString()));
var deserializer = new Deserializer<SimpleJsonReader>(typeof(Person));

var dst = deserializer.Deserialize(reader);

The XML protocol can be used with the original .bond file (or the JSON one, as the JsonName attributes are ignored), so nothing to change there. Here’s the code to write our object to XML (again using a string as a buffer)

var sb = new StringBuilder();
var writer = new SimpleXmlWriter(XmlWriter.Create(sb));
var serializer = new Serializer<SimpleXmlWriter>(typeof(Person));

serializer.Serialize(src, writer);

writer.Flush();

and to deserialize the XML we simply use

var reader = new SimpleXmlReader(
     XmlReader.Create(
         new StringReader(sb.ToString())));
var deserializer = new Deserializer<SimpleXmlReader>(typeof(Person));

var dst = deserializer.Deserialize(reader);

Transcoding

The Transcoder allows us to convert “payloads” from one protocol to another. For example, let’s assume we’ve got a SimpleXmlReader representing some XML data and we want to transcode it to a CompactBinaryWriter format, we can do the following

var reader = new SimpleXmlReader(XmlReader.Create(new StringReader(xml)));

var output = new OutputBuffer();
var writer = new CompactBinaryWriter<OutputBuffer>(output);

var transcode = new Transcoder<
   SimpleXmlReader, 
   CompactBinaryWriter<OutputBuffer>>(
      typeof(Person));

transcode.Transcode(reader, writer);

Now our payload is represented as a CompactBinaryWriter. Obviously this is more useful in scenarios where you have readers and writers as opposed to this crude example where we could convert to and from the Person object ourselves.

References

A Young Person’s Guide to C# Bond

TestStack.White Gotchas/Tips

RadioButton Click might not actually change anything

The Click method does not actually click on the radio button itself. This is noticeable where a radio button fills some extra space: in some cases the click will not land over the radio button or its text, and thus doesn’t seem to work.

Instead use

var radioButton = window.Get<RadioButton>(SearchCriteria.ByText("One"));
radioButton.SetValue(true);

Assert.IsTrue(radioButton.IsSelected);

What type is a UserControl mapped to in TestStack.White?

A WPF UserControl maps to TestStack.White’s CustomUIItem. Hence

<UserControl 
   x:Class="MyClass"
   x:Name="myClass">
<!-- Other elements -->
</UserControl>

can be accessed using

var myClassUserControl =
   window.Get<CustomUIItem>(
      SearchCriteria.ByAutomationId("myClass"));

Defining a custom control mapping

When using the generic Get method in TestStack.White, you have the ability to convert the automation control to a TestStack.White Label, Button etc., to give the feel of interacting with the capabilities exposed by these types of control.

In the case of a WPF UserControl, we’ve seen this maps to a CustomUIItem. It would be useful to define a TestStack.White-compatible UserControl for use with the Get method (for example).

Let’s first look at how the TestStack.White source code implements a Label (here’s the source for the Label control)

public class Label : UIItem
{
   protected Label() {}
   public Label(AutomationElement automationElement, 
       IActionListener actionListener) : 
          base(automationElement, actionListener) {}

   public virtual string Text
   {
      get { return (string) Property(AutomationElement.NameProperty); }
   }
}

Now in our case we need to create a similar class, but derived from CustomUIItem, so here’s ours

[ControlTypeMapping(CustomUIItemType.Custom, WindowsFramework.Wpf)]
public class UserControl : CustomUIItem
{
   public UserControl(
      AutomationElement automationElement, 
      ActionListener actionListener)
         : base(automationElement, actionListener)
   {            
   }

   protected UserControl()
   {            
   }
}

According to the Custom UI Items documentation, an empty constructor is mandatory, with a protected or public access modifier.

The ControlTypeMapping attribute is used to allow TestStack.White to map the return from the Get method to the new UserControl type, for example

var userControl = window.Get<UserControl>(
   SearchCriteria.ByAutomationId("myClass"));

Selecting an item in a ComboBox

The code for selecting an item in a ComboBox is fairly simple in TestStack.White, but when I used it I kept getting exceptions mentioning the virtualization pattern.

Luckily, as TestStack.White is built upon the Microsoft UI Automation framework, others have been here before me; this answer from Stack Overflow worked for me. Here’s the code, slightly altered for use as an extension method

public static void SelectItem(this ComboBox control, string item)
{
   var listControl = control.AutomationElement;

   var automationPatternFromElement = 
      GetSpecifiedPattern(listControl,
         "ExpandCollapsePatternIdentifiers.Pattern");

   var expandCollapsePattern =
      listControl.GetCurrentPattern(automationPatternFromElement) 
         as ExpandCollapsePattern;
   
   if(expandCollapsePattern != null)
   {
      expandCollapsePattern.Expand();
      expandCollapsePattern.Collapse();

      var listItem = listControl.FindFirst(
          TreeScope.Subtree,
          new PropertyCondition(AutomationElement.NameProperty, item));

      automationPatternFromElement = 
         GetSpecifiedPattern(listItem, 
            "SelectionItemPatternIdentifiers.Pattern");

      var selectionItemPattern =
         listItem.GetCurrentPattern(automationPatternFromElement) 
            as SelectionItemPattern;

      if(selectionItemPattern != null)
      {
         selectionItemPattern.Select();
      }
   }
}

private static AutomationPattern GetSpecifiedPattern(
   AutomationElement element, string patternName)
{
   return element.GetSupportedPatterns()
      .FirstOrDefault(pattern => 
         pattern.ProgrammaticName == patternName);
}

UI Automation Testing with TestStack.White

TestStack.White is based on the UI Automation libraries (see UI Automation), offering a simplification of those APIs for automating a UI and allowing us to write unit tests against that automation.

Getting Started

Let’s jump straight in and write a simple UI automation unit test around the Calc.exe application.

  • Create a new C# Unit Test project (or class library, adding your favoured unit testing framework)
  • Install the TestStack.White NuGet package

Let’s begin by creating a simple test method which starts the Calc.exe application, gets access to the calculator window and then disposes of it. We’ll obviously insert code into this test to do something of value soon, but for now, here are the basics

[TestMethod]
public void TestMethod1()
{
   using(var application = Application.Launch("Calc.exe"))
   {
      var calculator = application.GetWindow("Calculator", InitializeOption.NoCache);

      // do something with the application

      application.Close();
   }
}

Well, that doesn’t do anything too exciting – it runs Calc.exe and then closes it – but now we can start interacting with an instance of the calculator’s UI using TestStack.White.

Let’s start by getting the button with the number 7 and click/press it.

var b7 = calculator.Get<Button>(SearchCriteria.ByText("7"));
b7.Click();

By using the Get method with the generic parameter Button, we get back a button object which we can interact with directly. The SearchCriteria allows us to find a UI control in the Calculator with the text (in this case) 7. As is probably quite obvious, we call the Click method on this button object to simulate a button click event.

We can’t always get controls by their text, so using Spy++’s cross-hair/find-window tool we can find the “Control ID” (which is in hex) and instead find a control via this id (White calls this the automation id), hence

var plus = calculator.Get<Button>(
       SearchCriteria.ByAutomationId(
           0x5D.ToString()));
plus.Click();

So let’s look at a completed and very simple unit test which adds two numbers and checks that the output (on the screen) is as expected

var b7 = calculator.Get<Button>(
   SearchCriteria.ByText("7"));
b7.Click();

var plus = calculator.Get<Button>(
   SearchCriteria.ByAutomationId(
      0x5D.ToString()));
plus.Click();

var b5 = calculator.Get<Button>(
   SearchCriteria.ByText("5"));
b5.Click();

var eq = calculator.Get<Button>(
   SearchCriteria.ByAutomationId(
      0x79.ToString()));
eq.Click();

var a = calculator.Get(
   SearchCriteria.ByAutomationId(
      0x96.ToString()));

var r = a.Name;
Assert.AreEqual("12", r);

Managed applications

In the above example we used Spy++ to get control ids etc. For WPF we can use the Snoop utility, and for the automation id use the name of the control, for example

var searchBox = pf.Get<TextBox>(
   SearchCriteria.ByAutomationId("SearchBox"));

where SearchBox is the name associated with the control.

References

http://teststackwhite.readthedocs.io/en/latest/
https://github.com/TestStack/White

Same XamDataGrid different layouts for different types

In some cases you might be using Infragistics’ XamDataGrid with differing types. For example, maybe a couple of types share the same base class but each has differing properties that you need the grid to display, or maybe you have heterogeneous data which you want to display in the same grid.

To do this we simply define different field layouts within the XamDataGrid and use the Key property to define which layout is used for which type.

Let’s look at a simple example which will display two totally different sets of columns for the data. Here’s the example classes

public class Train
{
   public string Route { get; set; }
   public int Carriages { get; set; }
}

public class Car
{
   public string Make { get; set; }
   public string Model { get; set; }
   public float EngineSize { get; set; }
}

As you can see, the classes do not share a common base class or implement a common interface. If we set up our XamDataGrid like this

<ig:XamDataGrid DataSource="{Binding}">
   <ig:XamDataGrid.FieldLayouts>
      <ig:FieldLayout Key="Train">
         <ig:Field Name="Route" />
         <ig:Field Name="Carriages" />
      </ig:FieldLayout>

      <ig:FieldLayout Key="Car">
         <ig:Field Name="Make" />
         <ig:Field Name="Model" />
         <ig:Field Name="EngineSize" />
      </ig:FieldLayout>
   </ig:XamDataGrid.FieldLayouts>
</ig:XamDataGrid>

we can then supply an IEnumerable (such as an ObservableCollection) with items all of the same type, i.e. Car or Train objects, or a mixture of both.

The Key should be the name of the type that the field layout applies to. So, for example, when Train objects are found in the DataSource, the Train FieldLayout is used, hence the columns Route and Carriages are displayed; likewise, when Car objects are found the Car layout is used, thus Make, Model and EngineSize are displayed.

Note: The field layout is applied per row, i.e. the grid control doesn’t group all Trains together and/or all Cars; the rows are displayed in the order of the data, and the appropriate field layout is used each time the object type changes.
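To feed the grid above, the DataContext can expose a heterogeneous collection. Here’s a minimal sketch; the Train and Car classes are as defined earlier, while the SampleData helper and the sample values are just mine for illustration.

```csharp
using System.Collections.ObjectModel;

// a mixed collection: the grid picks the FieldLayout whose Key matches
// each item's type name ("Train" or "Car"), row by row
var data = SampleData.Create();
System.Console.WriteLine($"{data.Count} rows");

public class Train
{
    public string Route { get; set; }
    public int Carriages { get; set; }
}

public class Car
{
    public string Make { get; set; }
    public string Model { get; set; }
    public float EngineSize { get; set; }
}

public static class SampleData
{
    public static ObservableCollection<object> Create() =>
        new ObservableCollection<object>
        {
            new Train { Route = "London - Leeds", Carriages = 8 },
            new Car { Make = "Ford", Model = "Focus", EngineSize = 1.6f },
            new Train { Route = "York - Hull", Carriages = 4 }
        };
}
```

Binding this collection as the grid’s DataSource would produce a Train row, a Car row, then another Train row, each using its own layout.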

Dynamic Proxies with Castle.DynamicProxy

I’ve recently had to look at updating our very old version of Castle.DynamicProxy to a more recent version, and things have changed a little, so I thought it’d be a perfect excuse to write a little blog post about dynamic proxies, and Castle.DynamicProxy in particular.

What is a dynamic proxy?

Let’s begin with a simple definition – a proxy acts as an interception mechanism to a class (or interface) in a transparent way, allowing the developer to intercept calls to the original class and add or change its functionality. For example, NHibernate uses proxies for lazy loading, and mocking frameworks use them to intercept method/property calls.

Sounds great, what are the pitfalls?

The primary pitfall of dynamic proxies is that they can add to the overall memory footprint of your application if used too liberally. But if they supply the functionality you require then this probably isn’t an issue, especially with 64-bit memory limits. They do add an element of complexity which can become a pain to debug through; of course, there are always trade-offs.

Let’s see some code

We’re going to use the Castle.Core NuGet package for this example, so create yourself a Console application, add this package to your references, and then we’re good to go.

Proxies in remoting require you to derive your class from MarshalByRefObject, but this is not practical if you are unable to change the base class of your class. With Castle.DynamicProxy we can proxy our class without changing the base class, although we will need the class members to be virtual to use this code.

We’re going to create an interceptor which, as the name suggests, will be used to intercept calls to our object via the dynamic proxy; in this case we’ll log the method/property called to the Console.

public class Interceptor : IInterceptor
{
   public void Intercept(IInvocation invocation)
   {
      Console.WriteLine($"Before target call {invocation.Method.Name}" );
      try
      {
         invocation.Proceed();
      }
      catch (Exception e)
      {
         Console.WriteLine($"Target exception {e.Message}");
         throw;
      }
      finally
      {
         Console.WriteLine($"After target call {invocation.Method.Name}");
      }
   }
}

Now let’s create a simple class to demo this; it’ll have both a method and a property, to get a flavour of how these should look

public class MyClass
{
   public virtual bool Flag { get; set; }

   public virtual void Execute()
   {
      Console.WriteLine("Execute method called");
   }
}

Simple enough – notice we need to mark the property and method as virtual; also notice we’ve done nothing else to the class to show it’s going to be used in a proxy scenario.

Finally let’s see the code to proxy this class and change the property and run the method

var proxy = new ProxyGenerator()
   .CreateClassProxy<MyClass>(
       new Interceptor());
proxy.Flag = true;
proxy.Execute();

That’s it. The output from running this in a Console will be

Before target call set_Flag
After target call set_Flag
Before target call Execute
Execute method called
After target call Execute

The Flag property setter is run, followed by the Execute method, both of which are intercepted.

We can also intercept interfaces (as you’d expect, since dynamic proxies are used in mocking frameworks). However, your interceptor would need to mimic the functionality of an implementation of the interface. So, for this example, comment out the invocation.Proceed(); call in the interceptor.

Here’s a simple interface

public interface IPerson
{
   string FirstName { get; set; }
   string LastName { get; set; }
}

Now our code for executing our proxy against this interface would look like this

var proxy = new ProxyGenerator()
   .CreateInterfaceProxyWithoutTarget<IPerson>(
      new Interceptor());
proxy.FirstName = "Scooby";
proxy.LastName = "Doo";

The output will show the calls to the interface property setters. We can also create a dynamic proxy to an interface but supply the underlying target, by implementing the interface and supplying an instance to the proxy generator – so uncomment the invocation.Proceed(); line in the interceptor and implement the IPerson interface, such as

public class Person : IPerson
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
}

and now our proxy generator code can be changed to this

var proxy = (IPerson)new ProxyGenerator()
   .CreateInterfaceProxyWithTarget(
      typeof(IPerson), 
      new Person(),
      new Interceptor());
proxy.FirstName = "Scooby";
proxy.LastName = "Doo";

In this example we’ve not made our implementation properties virtual, yet the Person setters will be invoked via the interceptor.

In this case the proxy is based upon the interface and simply forwards calls to the “target” object’s properties/methods. This forwarding means the target object does not need to have its methods/properties marked as virtual.

A gotcha here is that all calls to the target must go through the proxy to be intercepted; this means that if your target calls a method on itself, that call will not be intercepted. To see this in action, let’s assume our IPerson now has a method void Change() and the implementation of this sets FirstName to some value. So it looks like this

public void Change()
{
   FirstName = "Scrappy";
}

Now if you call the proxy’s Change method, it will be intercepted and our logging displayed; but when it proceeds with the Change method (above), the call to the FirstName setter will not be intercepted, as this runs on the target, not the proxy – hopefully that makes sense.

Scientist in the making (aka using Science.NET)

When we’re dealing with refactoring legacy code, we’ll often try to ensure the existing unit tests (if they exist), or new ones, cover as much of the code as possible before refactoring it. But there’s always a concern about turning off the old code completely until we’ve got high confidence in the new code. Obviously the test coverage figures and unit tests themselves should give us that confidence, but wouldn’t it be nice if we instead ran the old and new code in parallel and compared the behaviour, or at least the results, of the code? This is where the Scientist library comes in.

Note: This is very much (from my understanding) at an alpha/pre-release stage of development, so any code written here may differ from the way the library ends up working. So basically, what I’m saying is this code works at the time of writing.

Getting started

So the elevator pitch for Science.NET is that it allows us to run two different implementations of code side by side and compare the results. Let’s expand on that with an example.

First off, we’ll set-up our Visual Studio project.

  • Create a new console application (just because it’s simple to get started with)
  • From the Package Manager Console, execute Install-Package Scientist -Pre

Let’s start with a very simple example: assume we have a method which returns a numeric value – we don’t really need to worry much about what this value means – but if you’d like a back story, let’s assume we import data into an application and the method calculates the confidence that the data matches a known import pattern.

So the legacy code, or the code we wish to verify/test against looks like this

public class Import
{
   public float CalculateConfidenceLevel()
   {
       // do something clever and return a value
       return 0.9f;
   }
}

Now our new Import class looks like this

public class NewImport
{
   public float CalculateConfidenceLevel()
   {
      // do something clever and return a value
      return 0.4f;
   }
}

Okay, okay, I know the result is wrong, but this is meant to demonstrate the Science.NET library, not my Import code.

Right, so what we want to do is run the two versions of the code side by side and see whether they always give the same result. We’re going to simply run these in our console’s Main method for now, but of course the idea is that this code would be run from wherever you currently run the Import code. For now, just add the following to Main (we’ll discuss strategies for running the code shortly)

var import = new Import();
var newImport = new NewImport();

float confidence = 
   Scientist.Science<float>(
      "Confidence Experiment", experiment =>
   {
      experiment.Use(() => import.CalculateConfidenceLevel());
      experiment.Try(() => newImport.CalculateConfidenceLevel());
   });

Now, if you run this console application you’ll see the confidence variable has the value 0.9, as the result comes from the .Use code, but the Science method (surely this should be named the Experiment method :)) will actually run both of our methods and compare the results.

Obviously, as both the existing and new implementations are run side by side, performance might be a concern for complex methods, especially if running like this in production. See the RunIf method for turning individual experiments on/off if this is a concern.

The “Confidence Experiment” string denotes the name of the comparison test and can be useful in reports. If you ran this code, though, you’ll have noticed everything just worked – no errors, no reports, nothing. That’s because at this point the default result publisher (which can be accessed via Scientist.ResultPublisher) is an InMemoryResultPublisher; we need to implement a publisher to output to the console (or maybe to a logger or some other mechanism).

So let’s pretty much take the MyResultPublisher from Scientist.net, but output to the console instead, giving us

public class ConsoleResultPublisher : IResultPublisher
{
   public Task Publish<T>(Result<T> result)
   {
      Console.WriteLine(
          $"Publishing results for experiment '{result.ExperimentName}'");
      Console.WriteLine($"Result: {(result.Matched ? "MATCH" : "MISMATCH")}");
      Console.WriteLine($"Control value: {result.Control.Value}");
      Console.WriteLine($"Control duration: {result.Control.Duration}");
      foreach (var observation in result.Candidates)
      {
         Console.WriteLine($"Candidate name: {observation.Name}");
         Console.WriteLine($"Candidate value: {observation.Value}");
         Console.WriteLine($"Candidate duration: {observation.Duration}");
      }

      if (result.Mismatched)
      {
         // saved mismatched experiments to DB
      }

      return Task.FromResult(0);
   }
}

Now insert the following before the float confidence = line in our Main method

Scientist.ResultPublisher = new ConsoleResultPublisher();

Now when you run the code you’ll get the following output in the console window

Publishing results for experiment 'Confidence Experiment'
Result: MISMATCH
Control value: 0.9
Control duration: 00:00:00.0005241
Candidate name: candidate
Candidate value: 0.4
Candidate duration: 00:00:03.9699432

So now you’ll see where the string in the Science method can be used.

More…

Check out the documentation on Scientist.net, or the source itself, for more information.

Real world usage?

First off, let’s revisit how we might actually design our code to use such a library. The example was created from scratch to demonstrate basic use of the library, but it’s more likely that we’d either create an abstraction layer which instantiates and executes the legacy and new code, or, if available, add the new method to the legacy implementation code. So, in an ideal world, our Import and NewImport classes might implement an IImport interface. It would then be best to implement a new version of this interface and, within its methods, call the Science code, for example

public interface IImport
{
   float CalculateConfidenceLevel();
}

public class ImportExperiment : IImport
{
   private readonly IImport import = new Import();
   private readonly IImport newImport = new NewImport();

   public float CalculateConfidenceLevel()
   {
      return Scientist.Science<float>(
         "Confidence Experiment", experiment =>
         {
            experiment.Use(() => import.CalculateConfidenceLevel());
            experiment.Try(() => newImport.CalculateConfidenceLevel());
         });
   }
}

I’ll leave it to the reader to add the : IImport after the Import and NewImport classes.

So now our Main method would have the following

Scientist.ResultPublisher = new ConsoleResultPublisher();

var import = new ImportExperiment();
var result = import.CalculateConfidenceLevel();

Using an interface like this means it’s easy both to switch from the old Import to the experiment implementation and, eventually, to the new implementation – but then hopefully this is how we always code. I know those years of COM development make interfaces almost the first thing I write, along with my love of IoC.

And more…

Comparison replacement

So the simple example above demonstrates the return of a primitive/standard type, but what if the return is one of our own more complex objects, requiring a more complex comparison? We can supply a custom comparison

experiment.Compare((a, b) => a.Name == b.Name);

Of course, we could hand this comparison off to a more complex predicate.

Unfortunately the Science method expects a return type, hence if your aim is to run two methods with a void return, and maybe test some encapsulated data from the classes within the experiment, you’ll have to do a little more work.
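One workaround (my own sketch, not a feature of the library) is to wrap each void call in a lambda that runs it and then returns whatever state you want compared. Everything here besides Scientist.Science, Use and Try is hypothetical: the two job classes, their Run() methods and Status properties are invented for the example.

```csharp
using GitHub; // namespace of the Scientist NuGet package (assumption)

var legacy = new LegacyImportJob();
var shiny = new NewImportJob();

// wrap each void Run() in a lambda that returns the state we want compared
var status = Scientist.Science<string>("Void Experiment", experiment =>
{
    experiment.Use(() => { legacy.Run(); return legacy.Status; });
    experiment.Try(() => { shiny.Run(); return shiny.Status; });
});

public class LegacyImportJob
{
    public string Status { get; private set; } = "none";
    public void Run() { Status = "done"; } // imagine real work here
}

public class NewImportJob
{
    public string Status { get; private set; } = "none";
    public void Run() { Status = "done"; }
}
```

The experiment then compares the returned Status values, so a mismatch in the captured state is reported just like a mismatched return value.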

Toggle on or off

The IExperiment interface which we used to call .Use and .Try also has the RunIf method, which I mentioned briefly earlier. We might wish to write our code in such a way that the dev environment runs the experiments but production does not, ensuring our end users don’t suffer performance hits due to the experiments running. We can use RunIf in the following manner

experiment.RunIf(() => !environment.IsProduction);

for example.

If we needed to include this line in every experiment it might be quite painful, so it’s actually more likely we’d use this to block/run specific experiments – maybe we run all experiments in all environments except one very slow experiment.

To enable/disable all experiments, instead we can use

Scientist.Enabled(() => !environment.IsProduction);

Note: this method is not in the NuGet package I’m using, but it is in the current source on GitHub and in the documentation, so hopefully it works as expected in a subsequent release of the NuGet package.

Running something before an experiment

We might need to run something before an experiment starts, but with that code inside the context of the experiment, a little like a test setup method. For this we can use

experiment.BeforeRun(() => BeforeExperiment());

In the above, the method BeforeExperiment() runs before the experiment continues.

Finally

I’ve not covered all the currently available methods here as the Scientist.net repository already does that, but hopefully I’ve given a peek into what you might do with this library.

NPOI saves the day

Introduction

NPOI is a port of POI for .NET. You know how we on the .NET side like to prefix with N or, in the case of JUnit, change J to N for our versions of Java libraries.

NPOI allows us to write Excel files without Excel needing to be installed. By writing files directly it also gives us speed, less likelihood of leaving an Excel COM/Automation object in memory and, basically, a far nicer API.

So how did NPOI save the day?

I am moving an application to WPF and in doing so the third party controls also moved from WinForms to WPF versions. One, a grid control, used to have a great export to Excel feature which output the data in a specific way; unfortunately the WPF version did not write the Excel file in the same format. I was therefore tasked with re-implementing the Excel exporting code. I began with Excel automation, which seemed slow, and I found it difficult getting the output as we wanted. I then tried a couple of Excel libraries for writing the BIFF format (as used by Excel). Unfortunately these didn’t fully work and/or didn’t do what I needed. Then one of my Java colleagues mentioned POI, so I checked for an N version of POI, and there it was: NPOI. NPOI did everything we needed, thus saving the day.

Let’s see some code

Okay usual prerequisites are

  • Create a project of whichever type you like
  • Using NuGet add the NPOI package

Easy enough.

Logically enough, we have workbooks at the top level, with worksheets within a workbook. Within the worksheet we have rows and, finally, cells within the rows; all pretty obvious.

Let’s take a look at some very basic code

var workbook = new XSSFWorkbook();
var worksheet = workbook.CreateSheet("Sheet1");

var row = worksheet.CreateRow(0);
var cell = row.CreateCell(0);

cell.SetCellValue("Hello Excel");

using (var stream = new FileStream("test.xlsx", FileMode.Create, FileAccess.Write))
{
   workbook.Write(stream);
}

Process.Start("test.xlsx");

The above should be pretty self-explanatory: after creating the workbook etc. we write the workbook to a file and then, using Process, we get Excel to display the file we’ve created.

Autosizing columns

By default you might feel the columns are too thin; we can therefore iterate over the columns after setting our data and run

for (var c = 0; c < worksheet.GetRow(0).Cells.Count; c++)
{
   worksheet.AutoSizeColumn(c);
}

The above code simply loops over the columns (I’ve assumed row 0 holds headings for each column) and tells the worksheet to auto-size them.

Grouping rows

One thing we have in our data is a need to show parent child relationships in the Excel spreadsheet. Excel allows us to do this by “grouping” rows. For example, if we have

Parent
Child1
Child2

We’d like to show this in Excel as collapsible rows, like a treeview. As such we want the child rows to be within the group, so we’d see something like this

+Parent

or expanded

-Parent
Child1
Child2

To achieve this in NPOI (assuming Parent is row 0) we would group rows 1 and 2, i.e.

worksheet.GroupRow(1, 2);
//if we want to default the rows to collapsed use
worksheet.SetRowGroupCollapsed(1, true);

Finally for grouping: the +/- button by default displays at the bottom of the grouping, which I always found a little strange, so to have this display at the top of the group we set

worksheet.RowSumsBelow = false;

Date format

You may wish to customise the way DateTimes are displayed, in which case we need to apply a style to the cell object. For example, let’s display the DateTime in the format dd mmm yy hh:mm

var creationHelper = workbook.GetCreationHelper();

var cellStyle = workbook.CreateCellStyle();
cellStyle.DataFormat = creationHelper
   .CreateDataFormat()
   .GetFormat("dd mmm yy hh:mm");
cellStyle.Alignment = HorizontalAlignment.Left;

// to apply to our cell we use
cell.CellStyle = cellStyle;

References

https://github.com/tonyqus/npoi

Adding a WebApi controller to an existing ASP.NET MVC application

So I’ve got an existing ASP.NET MVC5 application and need to add a REST api using WebApi.

  • Add a new Controller
  • Select Web API 2 Controller – Empty (or whatever your preference is)
  • Add your methods as normal
  • Open Global.asax.cs and near the start, for example after AreaRegistration but before the route configuration, add
    GlobalConfiguration.Configure(WebApiConfig.Register);
    

easy enough. The key is to not put the GlobalConfiguration as the last line in the Global.asax.cs as I did initially.

If we assume your controller was named AlbumsController, it might look something like this

public class AlbumsController : ApiController
{
   // api/albums
   public IEnumerable<Album> GetAllAlbums()
   {
      // assuming albums is populated 
      // with a list of Album objects
      return albums;
   }
}

As per the comment, access to the API will be through url/api/albums; see WebApiConfig in App_Start for the configuration of this URL.
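For reference, the WebApiConfig.Register method generated by the project template looks something like the following (the exact content may vary slightly between template versions):

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // enable [Route]/[RoutePrefix] attributes on controllers
        config.MapHttpAttributeRoutes();

        // convention-based routing: api/{controller} maps
        // AlbumsController to api/albums and so on
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}
```

It’s this DefaultApi route that gives us the api/ prefix, keeping Web API URLs separate from the MVC routes registered in RouteConfig.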

Passing arguments to an ASP.NET MVC5 controller

In our controller we might have a method along the lines

public string Search(string criteria, bool ignoreCase = true)
{
   // do something useful
   return $"Criteria: {criteria}, Ignore Case: {ignoreCase}";
}

Note: I’ve not bothered using HttpUtility.HtmlEncode on the return string as I want to minimize the code for these snippets.

So we can simply create a query string as per

http://localhost:58277/Music/Search?criteria=something&ignoreCase=false

or we can add/change the routing in RouteConfig; for example, in RegisterRoutes we add

routes.MapRoute(
   name: "Music",
   url: "{controller}/{action}/{criteria}/{ignoreCase}"
);

now we can compose a URL thus

http://localhost:58277/Music/Search/something/false

Note: the routing names /{criteria}/{ignoreCase} must have the same names as the method parameters.

Obviously this example is a little contrived as we probably wouldn’t want to create a route for such a specific method signature.

We might instead incorporate partial parameters into the route; for example, if all our MusicController methods took a criteria argument then we might use

routes.MapRoute(
   name: "Music",
   url: "{controller}/{action}/{criteria}"
);

Note: there cannot be another route with the same number of parameters in the url preceding this or it will not be used.

and hence our URL would look like

http://localhost:58277/Music/Search/something?ignoreCase=false

ASP.NET MVC and IoC

This should be a nice short post.

As I use IoC a lot in my desktop applications I also want similar capabilities in an ASP.NET MVC application. I’ll use Unity as the container initially.

  • Create a new project using the Templates | Web | ASP.NET Web Application option in the New Project dialog in Visual Studio, press OK
  • Next Select the MVC Template and change authentication (if need be) and check whether to host in the cloud or not, then press OK
  • Select the References section in your solution explorer, right mouse click and select Manage NuGet Packages
  • Locate the Unity.Mvc package and install it

Once installed we need to locate the App_Start/UnityConfig.cs file and within the RegisterTypes method we add our mappings as usual, i.e.

container.RegisterType<IServerStatus, ServerStatus>();

There are also other IoC container NuGet packages, including NInject (NInject.MVCx). With these we simply install the package relevant to our version of MVC, for example NInject.MVC4, and now we are supplied with the App_Start/NinjectWebCommon.cs file where we can use the RegisterServices method to register our mappings, i.e.

kernel.Bind<IServerStatus>().To<ServerStatus>();
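With either container, the payoff is constructor injection in our controllers. A sketch, assuming a hypothetical StatusController that uses the IServerStatus mapping registered above:

```csharp
using System.Web.Mvc;

public class StatusController : Controller
{
    private readonly IServerStatus serverStatus;

    // the container resolves IServerStatus when MVC creates the controller
    public StatusController(IServerStatus serverStatus)
    {
        this.serverStatus = serverStatus;
    }

    public ActionResult Index()
    {
        // pass the injected dependency's data to the view
        return View(serverStatus);
    }
}
```

No service-locator calls are needed in the controller itself; the container’s dependency resolver supplies the constructor arguments.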

More…

See Extending NerdDinner: Adding MEF and plugins to ASP.NET MVC for information on using MEF with ASP.NET.