A quick look at WPF Localization Extensions

Continuing my look into localizing WPF application…

I’ve covered techniques used by themes to change a WPF application’s resources for the current culture, and I’ve looked at using locbaml for extracting and localizing resources. Now I’m looking at WPF Localization Extensions.

The WPF Localization Extensions take the route of good old resx files plus markup extensions (along with other classes) to implement localization of WPF applications.

Getting Started

  • Create a WPF application
  • Using NuGet, install the WPFLocalizeExtension
  • Open the Properties section and copy Resources.resx and rename the copy Resources.fr-FR.resx (or whatever culture you wish to support)

As with my other examples of localizing WPF applications, I’m not going to put too much effort into developing a UI as it’s the concepts I’m more interested in at this time.

First off let’s add the following XAML to MainWindow.xaml within the Window namespaces etc.


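A typical set of declarations looks like the following sketch; the assembly name WpfLocalizationSample is an assumption, so substitute your own project’s assembly name:

```xml
<Window x:Class="WpfLocalizationSample.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:lex="http://wpflocalizeextension.codeplex.com"
        lex:LocalizeDictionary.DesignCulture="en"
        lex:ResxLocalizationProvider.DefaultAssembly="WpfLocalizationSample"
        lex:ResxLocalizationProvider.DefaultDictionary="Resources"
        Title="MainWindow" Height="350" Width="525">
    <Grid>
        <!-- content -->
    </Grid>
</Window>
```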
The DefaultDictionary needs to be the name of the resource file, whether that’s Resources (excluding the .resx) or one you’ve created named Strings or similar; just exclude the extension.

The DefaultAssembly is the name of the assembly to be used as the default for resources, i.e. in this case it’s the name of my project’s assembly.

Next up, within the Grid, we’re going to have this

<TextBlock Text="{lex:Loc Header}" />

Header is a key – obviously on a production ready application with more than one string to translate, we’d probably implement a better naming convention.

A tell-tale sign that things are wrong is the text being displayed as Key: Header. This usually points to one of the namespace values being incorrect, such as the DefaultDictionary name.

That’s it for the UI.

In Resources.resx, add a string named Header and give it a value; mine defaults to English and hence has Hello as the value. In Resources.fr-FR.resx, add a Header name and give it the value Bonjour.

That’s the extent of our strings for this application. If you run the application you should see the default Resources string, “Hello”. So now let’s look at testing the fr-FR culture.

In App.xaml.cs create a default constructor and place this code within it

LocalizeDictionary.Instance.SetCurrentThreadCulture = true;
LocalizeDictionary.Instance.Culture = new CultureInfo("fr-FR");

Run the application again and the displayed string will be taken from the fr-FR resource file.

To allow us to easily switch between cultures at runtime, we can use the LocalizeDictionary. Here’s a combo box selector to do this (taken from the sample source on the WPF Localization Extensions GitHub page).

<ComboBox ItemsSource="{Binding Source={x:Static lex:LocalizeDictionary.Instance}, Path=MergedAvailableCultures}"
   SelectedItem="{Binding Source={x:Static lex:LocalizeDictionary.Instance}, Path=Culture}" DisplayMemberPath="NativeName" />

We also need to be able to get strings from the selected resource in our code. Here’s a simple static class from Stack Overflow which allows us to get a string (or other type) from the currently selected resources

public static class LocalizationProvider
{
    public static T GetLocalizedValue<T>(string key)
    {
        return LocExtension.GetLocalizedValue<T>(
            Assembly.GetCallingAssembly().GetName().Name + ":Resources:" + key);
    }
}

The string “Resources” should obviously be changed to the name of your resource files (for example if you’re using just strings in a “Strings” resource etc).
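With that helper in place, fetching the Header value from code (key name from the example above) is a one-liner sketch:

```csharp
// Returns "Hello" or "Bonjour" depending on LocalizeDictionary.Instance.Culture
var header = LocalizationProvider.GetLocalizedValue<string>("Header");
```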

This is certainly simpler to set up than locbaml; the obvious drawback with this approach is that the strings at design time are not very useful. But if, like me, you tend to code WPF UI primarily in XAML then this probably won’t concern you.

Localizing a WPF application using locbaml

This post is going to mirror the Microsoft post How to: Localize an Application but I’ll try to add something of value to it.

Getting Started

Let’s create a WPF Application, mine’s called Localize1. In the MainWindow add one or more controls – I’m going basic at this point with the following XAML within the Window element of MainWindow.xaml


According to the Microsoft “How To”, we now add a UICulture element to the csproj. So locate the end of the first <PropertyGroup> element and put the following within it

<UICulture>en-GB</UICulture>
See AssemblyInfo.cs comment on using the UICulture element also

Obviously put the culture specific to your default locale, hence mine’s en-GB. Save the altered csproj and of course reload the project in Visual Studio if you have it loaded.

The inclusion of the UICulture will result (when the application is built) in a folder en-GB (in my case) with a single satellite assembly created, named Localize1.resources.dll.

Next we’re going to use msbuild to generate Uid’s for our controls. So from the command line in the project folder of your application run

msbuild /t:updateuid Localize1.csproj

Obviously replace the project name with your own. This should generate Uid’s for controls within your XAML files. They’re not very descriptive, i.e. Grid_1, TextBlock_1 etc., but we’ll stick with following the “How To” for now. Of course you can implement your own Uid’s and either use msbuild /t:updateuid to generate any missing Uid’s, or ignore them and have Uid’s only for those controls you wish to localize.

We can also verify that Uid’s exist for our controls by running

msbuild /t:checkuid Localize1.csproj

At this point we’ve generated Uid’s for our controls and msbuild generated as part of the compilation a resource DLL for the culture we assigned to the project file.

We now need to look at generating an alternate language resource.

How to create an alternate language resource

We need to download the LocBaml tool or its source. I had problems locating this but luckily found the source on GitHub here.

So if you don’t have LocBaml already, download and build the source and drop the locbaml.exe into your bin\debug folder. Now run the following command

locbaml.exe /parse en-GB/Localize1.resources.dll /out:Localize1.csv

You could of course copy locbaml.exe to the en-GB folder if you prefer. What we’re after is for locbaml to generate our Localize1.csv file, to which we’ll then add translated text.

Here’s what my csv file looks like


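A row for our TextBlock looks roughly like this (the values here are illustrative, not copied from a real run):

```
Localize1.g.en-GB.resources:mainwindow.baml,TextBlock_1:System.Windows.Controls.TextBlock.$Content,Text,True,True,,Hello
```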
If you view this csv in Excel you’ll see 7 columns, in the following order (descriptions copied from the How To document)

  • BAML Name. The name of the BAML resource with respect to the source language satellite assembly.
  • Resource Key. The localized resource identifier.
  • Category. The value type.
  • Readability. Whether the value can be read by a localizer.
  • Modifiability. Whether the value can be modified by a localizer.
  • Comments. Additional description of the value to help determine how a value is localized.
  • Value. The text value to translate to the desired culture.

For our translators (if we’re using an external translator to localize our applications) we might wish to supply comments regarding expectations or context for the item to be localized.

So, go ahead and translate the string Hello to an alternate culture; I’m going to change it to Bonjour. Once completed, save the csv as Localize1_fr-FR.csv (in my case I’m translating to French).

Now we want to get locbaml to generate our new satellite assembly for the French language resources, so again from the Debug folder (where you should have the generated csv from the original set of resources as well as the new fr-FR file) create a folder named fr-FR (or whatever your new culture is).

Run the locbaml command

locbaml.exe /generate en-GB/Localize1.resources.dll /trans:Localize1_fr-FR.csv /out:fr-FR /cul:fr-FR

This will generate a new .resources.dll based upon Localize1.resources.dll but using our translated text (as specified in the file Localize1_fr-FR.csv). The new DLL will be written to the fr-FR folder.

Testing our translations

So now let’s see if everything worked by testing our translations in our application.

The easiest way to do this is to edit App.xaml.cs and, if it doesn’t have a constructor, add one which should look like this

public App()
{
   CultureInfo ci = new CultureInfo("fr-FR");
   Thread.CurrentThread.CurrentCulture = ci;
   Thread.CurrentThread.CurrentUICulture = ci;
}

You’ll obviously require the following using clauses as well

using System.Globalization;
using System.Threading;

We’re basically forcing our application to use fr-FR by default when it starts. If all went well, you should see the TextBlock with the text Bonjour.

Now change the Culture to one which you have not generated a set of resources for, i.e. in my case I support en-GB and fr-FR, so switching to en-US and running the application will have an undesirable effect: an IOException occurs with the additional information “Cannot locate resource ‘mainwindow.xaml'”. This is not very helpful, but basically means we do not have a “fallback” or “neutral language” resource.

Setting a fallback/neutral language resource

We obviously don’t want to have to create resource files for every possible culture. What we need is a fallback or neutral language resource which is used when a culture is not supported via a translation DLL. To achieve this, open AssemblyInfo.cs and locate the commented-out line which includes NeutralResourcesLanguage, or just add the following

[assembly: NeutralResourcesLanguage("en-GB", UltimateResourceFallbackLocation.Satellite)]

Obviously replace the en-GB with your preferred default language. Run the application again and no IOException should occur; the default en-GB resources are used.

What about strings in code?

Well, as the name suggests, locbaml is really localizing our BAML; when our WPF application starts up, in essence it replaces our XAML with the versions stored in the resource DLL.

So the string that we’ve embedded in MainWindow.xaml is not separated from the XAML (i.e. it’s embedded within the TextBlock itself). So we need to move our strings into a shared ResourceDictionary file and reference them from the UI XAML. For example, in our App.xaml let’s add

   <Application.Resources>
      <!-- requires xmlns:system="clr-namespace:System;assembly=mscorlib" on the Application element -->
      <system:String x:Uid="Header_1" x:Key="Header">TBC</system:String>
   </Application.Resources>

Now, change our MainWindow.xaml to

<TextBlock Text="{StaticResource Header}" />

This allows us to get at the string resource using the standard WPF FindResource method, as per

var headerString = Application.Current.FindResource("Header");

This appears to be the only “easy” way I’ve found of accessing resources, and it requires the resource key, not the Uid. This is obviously not great (if it is the only mechanism) as it requires that we maintain both a Uid and a Key on each string, control etc. However, if we ensure strings are stored as string resources then this probably isn’t too much of a headache.



Localizing a WPF application using dynamic resources

There are several options for localizing a WPF application. We can use resx files, use locbaml to create resource DLLs for us, or simply use the same technique used in themes, i.e. DynamicResource and ResourceDictionary.

In this post I’m going to look at the DynamicResource and ResourceDictionary approach to localization. Although this technique can obviously be used with images etc., we’ll concentrate on dealing with strings, which usually are the main area of localization.

Let’s start with some code

Create a simple WPF application which will use the standard DynamicResource to set text on controls. We will create a “default” set of string resources to allow us to develop our initial application with and we will create two satellite assemblies which will contain the same string resources for en-GB and en-US resources.

  • Create a WPF Application
  • Create a class library named Resources_en-GB and another class library named Resources_en-US
  • Add references to PresentationCore, PresentationFramework, WindowsBase and System.Xaml to these class libraries
  • Change the class libraries’ output folders for Debug and Release to match those of the WPF application, so the DLLs will be built to the same folder as the application

Now in the WPF application, add a ResourceDictionary; mine’s named Strings.xaml and this will act as our default/design-time dictionary. Here’s mine

<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                    xmlns:system="clr-namespace:System;assembly=mscorlib">

    <system:String x:Key="Header">TBC</system:String>

</ResourceDictionary>

and my MainWindow.xaml looks like this

<Window x:Class="WpfLocalizable.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
    <Grid>
        <TextBlock Text="{DynamicResource Header}" FontSize="24"/>
        <StackPanel Orientation="Horizontal" VerticalAlignment="Bottom">
            <Button Content="US" Width="100" Margin="10" />
            <Button Content="GB" Width="100" Margin="10" />
        </StackPanel>
    </Grid>
</Window>
I didn’t say this was going to be exciting, did I?

Now if you run the application as it currently stands, you’ll see the string TBC – our design-time string.

Next, copy the Strings.xaml file from the application to both the Resources_en-GB and Resources_en-US and change the string text to represent your GB and US strings for header – I used the word Colour in GB and Color in US – just to demonstrate the common language differences.
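As a sketch, the copy in Resources_en-US would then contain:

```xml
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                    xmlns:system="clr-namespace:System;assembly=mscorlib">

    <system:String x:Key="Header">Color</system:String>

</ResourceDictionary>
```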

Now if you build and run the application, you’ll still see the default header text, so we now need to make the application set the resources at start-up and allow us to easily switch them. So change the buttons in MainWindow.xaml to these

<Button Content="US" Width="100" Margin="10" Click="US_OnClick"/>
<Button Content="GB" Width="100" Margin="10" Click="GB_OnClick"/>

We’re going to simply use code behind for changing the resource in this demo. So in MainWindow.xaml.cs add the following code

private void LoadStringResource(string locale)
{
   var resources = new ResourceDictionary();

   resources.Source = new Uri("pack://application:,,,/Resources_" + locale + ";component/Strings.xaml", UriKind.Absolute);

   var current = Application.Current.Resources.MergedDictionaries.FirstOrDefault(
                    m => m.Source.OriginalString.EndsWith("Strings.xaml"));

   if (current != null)
   {
      Application.Current.Resources.MergedDictionaries.Remove(current);
   }

   Application.Current.Resources.MergedDictionaries.Add(resources);
}

private void US_OnClick(object sender, RoutedEventArgs e)
{
   LoadStringResource("en-US");
}

private void GB_OnClick(object sender, RoutedEventArgs e)
{
   LoadStringResource("en-GB");
}

and finally in the constructor let’s default to en-GB, so simply add this line after the InitializeComponent call

LoadStringResource("en-GB");
Now run the application; by default you should see the en-GB strings, and then press the US button to see the en-US version etc.

Finishing touches

In some situations we might want to switch the language strings used via an option (very useful when debugging, but also if your native language is not the same as the default on your machine). In most cases, we’re likely to want to switch the language at start-up to match the machine’s culture/language.

Using a ResourceDictionary might look a little more complex than CSV files, but it should be easy for your translators to use and, being ultimately XML, we could of course write a simple application to allow the translators to view strings etc. in a tabular format.

We can deploy as many or as few localized resources as we need on a machine.


Check out this useful document from Microsoft: WPF Globalization and Localization Overview.

Getting started with Bond

What’s Bond?

The Microsoft GitHub repo for Bond states that “Bond is an open source, cross-platform framework for working with schematized data. It supports cross-language serialization/deserialization and powerful generic mechanisms for efficiently manipulating data.”

To put it another way, Bond appears to be similar to Google’s protocol buffers. See Why Bond? for more on what Bond is.

At the time of writing, out of the box, Bond supports C++, C# and Python language bindings.

Jumping straight in

Let’s jump straight in and write some code. Here are the steps to create our project

  • For this example, let’s create a Console project in Visual Studio
  • Using NuGet add the Bond.CSharp package

Now we’ll define our schema using Bond’s IDL. This should be saved with the .bond extension, so in my case, this code is in Person.bond

namespace Sample

struct Person
{
    0: string FirstName;
    1: string LastName;
    2: int32 Age;
}

Notice that we create a namespace and then use a Bond struct to define our data. Within the struct, each item of data is preceded by a numeric id (see IDL Syntax for more information).

Now, we could obviously write the code to represent this IDL by hand, but it’d be better still if we can generate the source code from the IDL. When we added the NuGet Bond.CSharp package we also got a copy of gbc, the command line tool for this purpose.

Open up a cmd prompt and locate gbc (mine was installed into \packages\Bond.CSharp.5.0.0\tools). From here we run the following

gbc c# <Project>\Person.bond -o=<Project>

Replace <Project> with the file path of your application, where your .bond file is located.

This command will generate the source files from the Person.bond IDL and output (the -o switch) to the root of the project location.

Now we need to include the generated files in our project – mine now includes Person_interfaces.cs, Person_proxies.cs, Person_services.cs and Person_types.cs. In fact we only need the Person_types.cs for this example. This includes the C# representation of our IDL and looks (basically) like this

public partial class Person
{
   public string FirstName { get; set; }

   public string LastName { get; set; }

   public int Age { get; set; }

   public Person()
      : this("Sample.Person", "Person")
   {
   }

   protected Person(string fullName, string name)
   {
      FirstName = "";
      LastName = "";
   }
}

Let’s now look at some code for writing a Person to the Bond serializer.

Note: There is an example of serialization code in the guide to Bond. This shows the static helper methods Serialize.To and Deserialize.From; however, these are not the most optimal for non-trivial code, so I’ll ignore those for my example.

Using clause

Bond includes a Bond.IO.Safe namespace and a Bond.IO.Unsafe namespace; according to the documentation, the Unsafe namespace includes the fastest code, so for this example I’m using Bond.IO.Unsafe.

How to write an object to Bond

var src = new Person
{
   FirstName = "Scooby",
   LastName = "Doo",
   Age = 7
};
var output = new OutputBuffer();
var writer = new CompactBinaryWriter<OutputBuffer>(output);
var serializer = new Serializer<CompactBinaryWriter<OutputBuffer>>(typeof(Person));

serializer.Serialize(src, writer);

The Serialize.To helper allows us to dispense with creating the serializer ourselves, but the initial call creates a serializer behind the scenes, which can incur a performance hit if used inside a loop or the like; hence creating the serializer up front and using that instance in any loops provides better overall performance.
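For comparison, the helper collapses the serializer creation into a single call; a sketch, assuming the same src and writer as above:

```csharp
// Creates (and caches) the serializer internally on first use - convenient,
// but prefer an explicit Serializer instance inside tight loops
Serialize.To(writer, src);
```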

How to read an object from Bond

var input = new InputBuffer(output.Data);
var reader = new CompactBinaryReader<InputBuffer>(input);
var deserializer = new Deserializer<CompactBinaryReader<InputBuffer>>(typeof(Person));

var dst = deserializer.Deserialize(reader);

In the above code we’re getting the input from the OutputBuffer we created from writing data, although this is just to demonstrate usage. The InputBuffer can take a byte[] representing the data to be deserialized.

Where possible, InputBuffers and OutputBuffers should also be reused; simply set buffer.Position = 0 to reset them after use.
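A second round trip reusing the buffers might look like this sketch (using the objects created above):

```csharp
// Reset the output buffer and serialize another object into it
output.Position = 0;
serializer.Serialize(src, writer);

// Wrap the refreshed data in a new reader for deserialization
var input2 = new InputBuffer(output.Data);
var reader2 = new CompactBinaryReader<InputBuffer>(input2);
var dst2 = deserializer.Deserialize(reader2);
```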

Serialization Protocols

In the previous code we used the CompactBinary classes, which implement binary serialization (optimized for compactness, as the name suggests), but there are several other serialization protocols.

The FastBinaryReader/FastBinaryWriter classes are optimized for speed, and plug into our sample code like this

var writer = new FastBinaryWriter<OutputBuffer>(output);
var serializer = new Serializer<FastBinaryWriter<OutputBuffer>>(typeof(Person));


var reader = new FastBinaryReader<InputBuffer>(input);
var deserializer = new Deserializer<FastBinaryReader<InputBuffer>>(typeof(Person));

The SimpleBinaryReader/SimpleBinaryWriter classes offer a potential saving on payload size.

var writer = new SimpleBinaryWriter<OutputBuffer>(output);
var serializer = new Serializer<SimpleBinaryWriter<OutputBuffer>>(typeof(Person));


var reader = new SimpleBinaryReader<InputBuffer>(input);
var deserializer = new Deserializer<SimpleBinaryReader<InputBuffer>>(typeof(Person));

Human readable serialization protocols

At the time of writing, Bond supports the two “human-readable” based protocols, which are XML and JSON.

Let’s look at the changes required to read/write JSON.

The JSON protocol can be used with the .bond file as previously defined, or we can add a JsonName attribute to the fields, for example

namespace Sample

struct Person
{
    [JsonName("first")]
    0: string FirstName;
    [JsonName("last")]
    1: string LastName;
    [JsonName("age")]
    2: int32 Age;
}

if we are supporting JSON with named attributes (the attribute values here are illustrative). The easiest way to use the SimpleJsonReader/SimpleJsonWriter is via a string buffer (a StringBuilder in C# terms), so here’s the code to write our Person object to a JSON string

var sb = new StringBuilder();
var writer = new SimpleJsonWriter(new StringWriter(sb));
var serializer = new Serializer<SimpleJsonWriter>(typeof(Person));

serializer.Serialize(src, writer);

to deserialize the string back to an object we can use

var reader = new SimpleJsonReader(new StringReader(sb.ToString()));
var deserializer = new Deserializer<SimpleJsonReader>(typeof(Person));

var dst = deserializer.Deserialize(reader);

The XML protocol can be used with the original .bond file (or the JSON one, as the JsonName attributes are ignored) so nothing to change there. Here’s the code to write our object to XML (again we’re using a string as a buffer)

var sb = new StringBuilder();
var writer = new SimpleXmlWriter(XmlWriter.Create(sb));
var serializer = new Serializer<SimpleXmlWriter>(typeof(Person));

serializer.Serialize(src, writer);


and to deserialize the XML we simply use

var reader = new SimpleXmlReader(
         XmlReader.Create(new StringReader(sb.ToString())));
var deserializer = new Deserializer<SimpleXmlReader>(typeof(Person));

var dst = deserializer.Deserialize(reader);


The Transcoder allows us to convert “payloads” from one protocol to another. For example, let’s assume we’ve got a SimpleXmlReader representing some XML data and we want to transcode it to a CompactBinaryWriter format, we can do the following

var reader = new SimpleXmlReader(XmlReader.Create(new StringReader(xml)));

var output = new OutputBuffer();
var writer = new CompactBinaryWriter<OutputBuffer>(output);

var transcode = new Transcoder<SimpleXmlReader, CompactBinaryWriter<OutputBuffer>>(typeof(Person));

transcode.Transcode(reader, writer);

Now our payload is represented as a CompactBinaryWriter. Obviously this is more useful in scenarios where you have readers and writers as opposed to this crude example where we could convert to and from the Person object ourselves.


See also: A Young Person’s Guide to C# Bond

TestStack.White Gotcha/Tips

RadioButton Click might not actually change anything

The Click method does not actually click on the radio button itself. This is noticeable where a radio button fills some extra space; in some cases the click will not be over the radio button or its text, and thus doesn’t seem to work.

Instead use

var radioButton = window.Get<RadioButton>(SearchCriteria.ByText("One"));


What type is a UserControl mapped to in TestStack.White?

A WPF UserControl maps to the TestStack.White framework’s CustomUIItem. Hence

<!-- Other elements -->

can be accessed using

var myClassUserControl =
   window.Get<CustomUIItem>(
      SearchCriteria.ByAutomationId("MyClassUserControl")); // automation id is illustrative
Defining a custom control mapping

When using the generic Get method in TestStack.White, you have the ability to convert the automation control to a TestStack.White Label, Button etc., giving the feel of interacting directly with the capabilities exposed by these types of controls.

In the case of a WPF UserControl we see this maps to a CustomUIItem. It might be useful if we were to define a TestStack.White compatible UserControl for use with the Get method (for example).

Let’s firstly look at how TestStack.White source code implements a Label (here’s the source for the Label control)

public class Label : UIItem
{
   protected Label() {}
   public Label(AutomationElement automationElement,
       IActionListener actionListener) :
          base(automationElement, actionListener) {}

   public virtual string Text
   {
      get { return (string) Property(AutomationElement.NameProperty); }
   }
}

Now in our case we need to create a similar class but derived from the CustomUIItem, so here’s ours

[ControlTypeMapping(CustomUIItemType.Custom, WindowsFramework.Wpf)]
public class UserControl : CustomUIItem
{
   public UserControl(
      AutomationElement automationElement,
      ActionListener actionListener)
         : base(automationElement, actionListener)
   {
   }

   protected UserControl()
   {
   }
}
According to the Custom UI Items documentation, an empty constructor is mandatory, with a protected or public access modifier required.

The ControlTypeMapping attribute is used to allow TestStack.White to map the return from the Get method to the new UserControl type, for example

var userControl = window.Get<UserControl>(
   SearchCriteria.ByAutomationId("MyUserControl")); // automation id is illustrative
Selecting an item in a ComboBox

The code for selecting an item in a ComboBox is fairly simple in TestStack.White, but when I used it I kept getting exceptions saying something about virtualization pattern.

Luckily, as TestStack.White is built upon the MS Automation framework and others have been here before me, this answer from Stack Overflow worked for me. Here’s the code, slightly altered for use as an extension method

public static void SelectItem(this ComboBox control, string item)
{
   var listControl = control.AutomationElement;

   var automationPatternFromElement =
      GetSpecifiedPattern(listControl, "ExpandCollapsePatternIdentifiers.Pattern");

   var expandCollapsePattern =
      listControl.GetCurrentPattern(automationPatternFromElement)
         as ExpandCollapsePattern;
   if (expandCollapsePattern != null)
   {
      // expanding then collapsing forces virtualized items to be realized
      expandCollapsePattern.Expand();
      expandCollapsePattern.Collapse();

      var listItem = listControl.FindFirst(TreeScope.Children,
          new PropertyCondition(AutomationElement.NameProperty, item));

      automationPatternFromElement =
         GetSpecifiedPattern(listItem, "SelectionItemPatternIdentifiers.Pattern");

      var selectionItemPattern =
         listItem.GetCurrentPattern(automationPatternFromElement)
            as SelectionItemPattern;

      if (selectionItemPattern != null)
      {
         selectionItemPattern.Select();
      }
   }
}

private static AutomationPattern GetSpecifiedPattern(
   AutomationElement element, string patternName)
{
   return element.GetSupportedPatterns()
      .FirstOrDefault(pattern =>
         pattern.ProgrammaticName == patternName);
}

UI Automation Testing with TestStack.White

TestStack.White is based on the UI Automation libraries (see UI Automation), offering a simplification of such methods for automating a UI and allowing us to write unit tests against such UI automation.

Getting Started

Let’s jump straight in and write a simple UI automation unit test around the Calc.exe application.

  • Create a new C# Unit Test project (or class library, adding your favoured unit testing framework)
  • Install the TestStack.White nuget package

Let’s begin by creating a simple test method which starts the Calc.exe application, gets access to the calculator window and then disposes of it. We’ll obviously insert code into this test to do something of value soon, but for now here are the basics

public void TestMethod1()
{
   using (var application = Application.Launch("Calc.exe"))
   {
      var calculator = application.GetWindow("Calculator", InitializeOption.NoCache);

      // do something with the application
   }
}

Well, that doesn’t do anything too exciting; it runs Calc.exe and then closes it, but now we can start interacting with an instance of the calculator’s UI using TestStack.White.

Let’s start by getting the button with the number 7 and click/press it.

var b7 = calculator.Get<Button>(SearchCriteria.ByText("7"));
b7.Click();

By using the Get method with the generic parameter Button, we get back a button object which we can interact with directly. The SearchCriteria allows us to try to find a UI control in the Calculator with (in this case) the text 7. As is probably quite obvious, we call the Click method on this button object to simulate a button click event.

We can’t always get controls by their text, so using Spy++ and its cross-hair/find window tool we can find the “Control ID” (which is in hex) and instead find a control via this id (White calls this the automation id), hence

var plus = calculator.Get<Button>(
   SearchCriteria.ByAutomationId("93")); // hex control id found via Spy++, illustrative
So let’s look at a completed and very simple unit test to see that we can add two numbers and the output (on the screen) is as expected

var b7 = calculator.Get<Button>(SearchCriteria.ByText("7"));
b7.Click();

var plus = calculator.Get<Button>(SearchCriteria.ByText("+"));
plus.Click();

var b3 = calculator.Get<Button>(SearchCriteria.ByText("3"));
b3.Click();

var eq = calculator.Get<Button>(SearchCriteria.ByText("="));
eq.Click();

// the result display's automation id (found via Spy++) is illustrative
var a = calculator.Get(SearchCriteria.ByAutomationId("150"));

var r = a.Name;
Assert.AreEqual("10", r);

Managed applications

In the above example we used Spy++ to get control ids etc. For WPF we can use the utility Snoop, and for the automation id use the name of the control, for example

var searchBox = pf.Get<TextBox>(
   SearchCriteria.ByAutomationId("SearchBox"));

where SearchBox is the name associated with the control.



Same XamDataGrid different layouts for different types

In some cases you might be using Infragistics’ XamDataGrid with differing types. For example, maybe a couple of types have the same base class but each have differing properties that you need the grid to display, or maybe you have heterogeneous data which you want to display in the same grid.

To do this we simply define different field layouts within the XamDataGrid and use the Key property to define which layout is used for which type.

Let’s look at a simple example which will display two totally different sets of columns for the data. Here’s the example classes

public class Train
{
   public string Route { get; set; }
   public int Carriages { get; set; }
}

public class Car
{
   public string Make { get; set; }
   public string Model { get; set; }
   public float EngineSize { get; set; }
}

As you can see, the classes do not share a common base class or implement a common interface. We set up our XamDataGrid like this

<ig:XamDataGrid DataSource="{Binding}">
   <ig:XamDataGrid.FieldLayouts>
      <ig:FieldLayout Key="Train">
         <ig:FieldLayout.Fields>
            <ig:Field Name="Route" />
            <ig:Field Name="Carriages" />
         </ig:FieldLayout.Fields>
      </ig:FieldLayout>

      <ig:FieldLayout Key="Car">
         <ig:FieldLayout.Fields>
            <ig:Field Name="Make" />
            <ig:Field Name="Model" />
            <ig:Field Name="EngineSize" />
         </ig:FieldLayout.Fields>
      </ig:FieldLayout>
   </ig:XamDataGrid.FieldLayouts>
</ig:XamDataGrid>
we can then supply an IEnumerable (such as an ObservableCollection) with all the same type, i.e. Car or Train objects or a mixture of both.
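As a sketch, supplying mixed data (class and property names from the example above) could look like:

```csharp
// Each row picks the FieldLayout whose Key matches the object's type name
DataContext = new ObservableCollection<object>
{
    new Train { Route = "London to Brighton", Carriages = 8 },
    new Car { Make = "Ford", Model = "Focus", EngineSize = 1.6f },
    new Train { Route = "Glasgow to Edinburgh", Carriages = 6 }
};
```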

The Key should be the name of the type to which its field layout applies. So, for example, when Train objects are found in the DataSource, the Train FieldLayout is used, hence the columns Route and Carriages will be displayed; likewise, when Car objects are found the Car layout is used, thus Make, Model and EngineSize are displayed.

Note: The field layout is used for each row, i.e. the grid control doesn’t group all Trains together and/or all Cars, the rows are displayed in the order of the data and thus the field layouts are displayed each time the object type changes.

Dynamic Proxies with Castle.DynamicProxy

I’ve recently had to look at updating our very old version of Castle.DynamicProxy to a more recent version, and things have changed a little, so I thought it’d be a perfect excuse to write a little blog post about dynamic proxies, and Castle.DynamicProxy in particular.

What is a dynamic proxy?

Let’s begin with a simple definition: a proxy acts as an interception mechanism around a class (or interface) in a transparent way, allowing the developer to intercept calls to the original class and add to or change its functionality. For example, NHibernate uses dynamic proxies for lazy loading, and mocking frameworks use them to intercept method/property calls.

Sounds great, what are the pitfalls?

The primary pitfall of dynamic proxies is that they can add to the overall memory footprint of your application if used too liberally. But if they supply the functionality you require then this probably isn’t an issue, especially with 64-bit memory limits. Obviously they add an element of complexity which can become a pain to debug through; of course, there are always trade-offs.

Let’s see some code

We’re going to use the Castle.Core NuGet package for this example, so create yourself a Console application, add this package to your references and then we’re good to go.

Proxies in remoting require you to derive your class from MarshalByRefObject, but this is not practical if you are unable to change the base class of your class. With Castle.DynamicProxy we can proxy our class without changing the base class, although we will need the class members to be virtual to use this code.

We’re going to create an interceptor which, as the name suggests will be used to intercept calls to our object by the dynamic proxy and in this case we’ll log to Console the method/property called.

public class Interceptor : IInterceptor
{
   public void Intercept(IInvocation invocation)
   {
      Console.WriteLine($"Before target call {invocation.Method.Name}");
      try { invocation.Proceed(); }
      catch (Exception ex) { Console.WriteLine($"Target exception {ex.Message}"); }
      finally { Console.WriteLine($"After target call {invocation.Method.Name}"); }
   }
}

Now let’s create a simple class to demo this; it’ll have both a method and a property to give a flavour of how these should look

public class MyClass
{
   public virtual bool Flag { get; set; }

   public virtual void Execute()
   {
      Console.WriteLine("Execute method called");
   }
}

Simple enough – notice we need to mark the property and method as virtual; also notice we’ve done nothing else to the class to show it’s going to be used in a proxy scenario.

Finally let’s see the code to proxy this class and change the property and run the method

var proxy = new ProxyGenerator()
   .CreateClassProxy<MyClass>(new Interceptor());
proxy.Flag = true;
proxy.Execute();

That’s it. The output from running this in a Console will be

Before target call set_Flag
After target call set_Flag
Before target call Execute
Execute method called
After target call Execute

The Flag property setter is run, followed by the Execute method, both of which are intercepted.

We can also intercept interfaces (as you’d expect, since dynamic proxies are used in mocking frameworks). However, your interceptor will need to mimic the functionality of an implementation of the interface. So for this example comment out the invocation.Proceed(); call in the interceptor.

Here’s a simple interface

public interface IPerson
{
   string FirstName { get; set; }
   string LastName { get; set; }
}
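As a sketch of what “mimicking an implementation” might look like (this class is my own illustration, not part of Castle), an interceptor can back the interface’s properties with a dictionary, relying on the get_/set_ naming convention of property accessor methods:

```csharp
using System.Collections.Generic;
using Castle.DynamicProxy;

// Illustrative only: stands in for an implementation of a
// property-only interface proxied without a target
public class StandInInterceptor : IInterceptor
{
   private readonly Dictionary<string, object> values =
      new Dictionary<string, object>();

   public void Intercept(IInvocation invocation)
   {
      var name = invocation.Method.Name;
      if (name.StartsWith("set_"))
      {
         // property setter, e.g. set_FirstName
         values[name.Substring(4)] = invocation.Arguments[0];
      }
      else if (name.StartsWith("get_"))
      {
         // property getter, e.g. get_FirstName
         values.TryGetValue(name.Substring(4), out var value);
         invocation.ReturnValue = value;
      }
   }
}
```

Note there is no call to invocation.Proceed() here – with no target behind the proxy there is nothing to proceed to.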

Now our code for executing our proxy against this interface would look like this

var proxy = new ProxyGenerator()
   .CreateInterfaceProxyWithoutTarget<IPerson>(new Interceptor());
proxy.FirstName = "Scooby";
proxy.LastName = "Doo";

The output will show the calls to the interface property setters. We can also create a dynamic proxy to an interface but supply the underlying target, by implementing the interface and supplying an instance to the proxy generator. So uncomment the invocation.Proceed(); line in the interceptor and implement the IPerson interface, such as

public class Person : IPerson
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
}

and now our proxy generator code can be changed to this

var proxy = (IPerson)new ProxyGenerator()
   .CreateInterfaceProxyWithTarget(
      typeof(IPerson),
      new Person(),
      new Interceptor());
proxy.FirstName = "Scooby";
proxy.LastName = "Doo";

In this example we’ve not made our implementation’s properties virtual, yet the Person setters are still invoked via the interceptor.

In this case the proxy is based upon the interface and simply calls the “target” object properties/methods. Hence this forwarding of calls means the target object does not need to have methods/properties marked as virtual.

A gotcha here is that all calls to the target must go through the proxy to be intercepted; this means that if your target calls a method on itself, that call will not be intercepted. To see this in action, let’s assume our IPerson now has a method void Change() whose implementation sets FirstName to some value. So it looks like this

public void Change()
{
   FirstName = "Scrappy";
}

Now if you call the proxy’s Change method, it will be intercepted and our logging will be displayed, but when it proceeds with the Change method (above), the call to the FirstName setter will not be intercepted as this is run on the target, not the proxy – hopefully that makes sense.

Scientist in the making (aka using Science.NET)

When we’re dealing with refactoring legacy code, we’ll often try to ensure the existing unit tests (if they exist) or new ones cover as much of the code as possible before refactoring it. But there’s always a concern about turning off the old code completely until we’ve gained a high confidence in the new code. Obviously the test coverage figures and unit tests themselves should give us that confidence, but wouldn’t it be nice if instead we ran the old and new code in parallel and compared the behaviour, or at least the results? This is where the Scientist library comes in.

Note: This is very much (from my understanding) in an alpha/pre-release stage of development, so any code written here may differ from the way the library ends up working. So basically what I’m saying is this code works at the time of writing.

Getting started

So the elevator pitch for Science.NET is that it allows us to run two different implementations of code side by side and compare the results. Let’s expand on that with an example.

First off, we’ll set-up our Visual Studio project.

  • Create a new console application (just because it’s simple to get started with)
  • From the Package Manager Console, execute Install-Package Scientist -Pre

Let’s start with a very simple example, let’s assume we have a method which returns a numeric value, we don’t really need to worry much about what this value means – but if you like a back story, let’s assume we import data into an application and the method calculates the confidence that the data matches a known import pattern.

So the legacy code, or the code we wish to verify/test against looks like this

public class Import
{
   public float CalculateConfidenceLevel()
   {
      // do something clever and return a value
      return 0.9f;
   }
}

Now our new Import class looks like this

public class NewImport
{
   public float CalculateConfidenceLevel()
   {
      // do something clever and return a value
      return 0.4f;
   }
}

Okay, okay, I know the result is wrong, but this is meant to demonstrate the Science.NET library, not my Import code.

Right, so what we want to do is run the two versions of the code side by side and see whether they always give the same result. We’re going to simply run these in our console’s Main method for now, but of course the idea is that this code would be run from wherever you currently run the Import code. For now just add the following to Main (we’ll discuss strategies for running the code shortly)

var import = new Import();
var newImport = new NewImport();

float confidence = Scientist.Science<float>(
   "Confidence Experiment", experiment =>
   {
      experiment.Use(() => import.CalculateConfidenceLevel());
      experiment.Try(() => newImport.CalculateConfidenceLevel());
   });

Now, if you run this console application you’ll see the confidence variable will have the value 0.9 in it as it’s used the .Use code as the result, but the Science method (surely this should be named the Experiment method :)) will actually run both of our methods and compare the results.

Obviously as both the existing and new implementations are run side-by-side, performance might be a concern for complex methods, especially if running like this in production. See the RunIf method for turning on/off individual experiments if this is a concern.

The “Confidence Experiment” string denotes the name of the comparison test and can be useful in reports, but if you ran this code you’ll have noticed everything just worked, i.e. no errors, no reports, nothing. That’s because at this point the default result publisher (which can be accessed via Scientist.ResultPublisher) is an InMemoryResultPublisher; we need to implement a publisher to output to the console (or maybe to a logger or some other mechanism).

So let’s pretty much take the MyResultPublisher from Scientist.net but output to console, so we have

public class ConsoleResultPublisher : IResultPublisher
{
   public Task Publish<T>(Result<T> result)
   {
      Console.WriteLine(
         $"Publishing results for experiment '{result.ExperimentName}'");
      Console.WriteLine($"Result: {(result.Matched ? "MATCH" : "MISMATCH")}");
      Console.WriteLine($"Control value: {result.Control.Value}");
      Console.WriteLine($"Control duration: {result.Control.Duration}");
      foreach (var observation in result.Candidates)
      {
         Console.WriteLine($"Candidate name: {observation.Name}");
         Console.WriteLine($"Candidate value: {observation.Value}");
         Console.WriteLine($"Candidate duration: {observation.Duration}");
      }

      if (result.Mismatched)
      {
         // save mismatched experiments to a database
      }

      return Task.FromResult(0);
   }
}

Now insert the following before the float confidence = line in our Main method

Scientist.ResultPublisher = new ConsoleResultPublisher();

Now when you run the code you’ll get the following output in the console window

Publishing results for experiment 'Confidence Experiment'
Result: MISMATCH
Control value: 0.9
Control duration: 00:00:00.0005241
Candidate name: candidate
Candidate value: 0.4
Candidate duration: 00:00:03.9699432

So now you’ll see where the string in the Science method can be used.


Check out the documentation on Scientist.net or the source itself for more information.

Real world usage?

First off, let’s revisit how we might actually design our code to use such a library. The example was created from scratch to demonstrate basic use of the library, but it’s more likely that we’d either create an abstraction layer which instantiates and executes the legacy and new code, or, if available, add the new method to the legacy implementation code. So in an ideal world our Import and NewImport classes might implement an IImport interface. Thus it would be best to implement a new version of this interface and within the methods call the Science code, for example

public interface IImport
{
   float CalculateConfidenceLevel();
}

public class ImportExperiment : IImport
{
   private readonly IImport import = new Import();
   private readonly IImport newImport = new NewImport();

   public float CalculateConfidenceLevel()
   {
      return Scientist.Science<float>(
         "Confidence Experiment", experiment =>
         {
            experiment.Use(() => import.CalculateConfidenceLevel());
            experiment.Try(() => newImport.CalculateConfidenceLevel());
         });
   }
}

I’ll leave the reader to put the : IImport after the Import and NewImport classes.

So now our Main method would have the following

Scientist.ResultPublisher = new ConsoleResultPublisher();

var import = new ImportExperiment();
var result = import.CalculateConfidenceLevel();

Using an interface like this now means it’s both easy to switch from the old Import to the experiment implementation and eventually to the new implementation, but then hopefully this is how we always code. I know those years of COM development make interfaces almost the first thing I write along with my love of IoC.

And more…

Comparison replacement

So the simple example above demonstrated the return of a primitive/standard type, but what if the return is one of our own more complex objects, requiring a more complex comparison? We can supply our own comparison, for example

experiment.Compare((a, b) => a.Name == b.Name);

of course we could hand this comparison off to a more complex predicate.
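For instance, assuming a hypothetical ImportResult class returned by both implementations (this type and the helper are illustrations of mine, not part of Scientist), the predicate might tolerate small floating point differences:

```csharp
using System;

// hypothetical result type for illustration
public class ImportResult
{
   public string Name { get; set; }
   public float Confidence { get; set; }
}

public static class ImportComparison
{
   // treat results as equal if the names match and the confidence
   // values differ by less than a small tolerance
   public static bool SameResult(ImportResult a, ImportResult b) =>
      a.Name == b.Name &&
      Math.Abs(a.Confidence - b.Confidence) < 0.001f;
}

// within the experiment set-up we'd then write:
// experiment.Compare((a, b) => ImportComparison.SameResult(a, b));
```

Using a tolerance rather than == avoids flagging mismatches caused purely by floating point noise between the two implementations.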

Unfortunately the Science method expects a return type and hence if your aim is to run two methods with a void return and maybe test some encapsulated data from the classes within the experiment, then you’ll have to do a lot more work.

Toggle on or off

The IExperiment interface which we use to call .Use and .Try also has the method RunIf, which I mentioned briefly earlier. We might wish to write our code in such a way that the dev environment runs the experiments but production does not, ensuring our end users do not suffer performance hits due to the experiment running. We can use RunIf in the following manner

experiment.RunIf(() => !environment.IsProduction);

for example.

If we needed to include this line in every experiment it might be quite painful, so it’s actually more likely we’d use this to block/run specific experiments; maybe we run all experiments in all environments, except one very slow experiment.

To enable/disable all experiments, instead we can use

Scientist.Enabled(() => !environment.IsProduction);

Note: this method is not in the NuGet package I’m using but is in the current source on GitHub and in the documentation so hopefully it works as expected in a subsequent release of the NuGet package.

Running something before an experiment

We might need to run something before an experiment starts, but with that code within the context of the experiment, a little like a test setup method. We can use

experiment.BeforeRun(() => BeforeExperiment());

In the above we’ll run some method BeforeExperiment() before the experiment continues.


I’ve not covered all the currently available methods here as the Scientist.net repository already does that, but hopefully I’ve given a peek into what you might do with this library.

NPOI saves the day


NPOI is a port of POI for .NET. You know how we on the .NET side like to prefix with N or, in the case of JUnit, change J to N for our versions of Java libraries.

NPOI allows us to write Excel files without Excel needing to be installed. By writing files directly it also gives us speed, less likelihood of leaving an Excel COM/Automation object in memory, and basically a far nicer API.

So how did NPOI save the day?

I am moving an application to WPF and in doing so the third party controls also moved from WinForms to WPF versions. One, a grid control, used to have a great export to Excel feature which output the data in a specific way; unfortunately the WPF version did not write the Excel file in the same format. I was therefore tasked with re-implementing the Excel exporting code. I began with Excel automation, which seemed slow, and I found it difficult getting the output as we wanted. I then tried a couple of Excel libraries for writing the BIFF format (as used by Excel). Unfortunately these didn’t fully work and/or didn’t do what I needed. Then one of my Java colleagues mentioned POI, so I checked for an N version of POI, and there it was: NPOI. NPOI did everything we needed, thus saving the day.

Let’s see some code

Okay usual prerequisites are

  • Create a project of whichever type you like
  • Using NuGet add the NPOI package

Easy enough.

Logically enough, we have workbooks at the top level, with worksheets within a workbook. Within a worksheet we have rows, and finally cells within the rows – all pretty obvious.

Let’s take a look at some very basic code

var workbook = new XSSFWorkbook();
var worksheet = workbook.CreateSheet("Sheet1");

var row = worksheet.CreateRow(0);
var cell = row.CreateCell(0);

cell.SetCellValue("Hello Excel");

using (var stream = new FileStream("test.xlsx", FileMode.Create, FileAccess.Write))
{
   workbook.Write(stream);
}

Process.Start("test.xlsx");


The above should be pretty self-explanatory; after creating the workbook etc. we write the workbook to a file and then, using Process, we get Excel to display the file we’ve created.

Autosizing columns

By default you might feel the columns are too thin, we can therefore iterate over the columns after setting our data and run

for (var c = 0; c < worksheet.GetRow(0).Cells.Count; c++)
   worksheet.AutoSizeColumn(c);

The above code simply loops over the columns (I’ve assumed row 0 holds headings for each column) and tells the worksheet to auto-size them.

Grouping rows

One thing we have in our data is a need to show parent child relationships in the Excel spreadsheet. Excel allows us to do this by “grouping” rows. For example, if we have


We’d like to show this in Excel in collapsible rows, like a treeview. As such we want the child curves to be within the group so we’d see something like this


or expanded


to achieve this in NPOI (assuming Parent is row 0) we would group rows 1 and 2, i.e.

worksheet.GroupRow(1, 2);
//if we want to default the rows to collapsed use
worksheet.SetRowGroupCollapsed(1, true);

Finally, for grouping, the +/- button by default displays at the bottom of the group, which I always found a little strange; to have it display at the top of the group we set

worksheet.RowSumsBelow = false;

Date format

You may wish to customise the way DateTimes are displayed, in which case we need to apply a style to the cell object. For example, let’s display the DateTime in the format dd mmm yy hh:mm

var creationHelper = workbook.GetCreationHelper();

var cellStyle = workbook.CreateCellStyle();
cellStyle.DataFormat = creationHelper
   .CreateDataFormat()
   .GetFormat("dd mmm yy hh:mm");
cellStyle.Alignment = HorizontalAlignment.Left;

// to apply to our cell we use
cell.CellStyle = cellStyle;
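To see the format in action, the style needs to be applied to a cell that actually holds a DateTime value. A minimal sketch (creating its own workbook so it stands alone, with an arbitrary example date):

```csharp
using System;
using NPOI.XSSF.UserModel;

var workbook = new XSSFWorkbook();
var sheet = workbook.CreateSheet("Dates");

// build the custom date format style as above
var cellStyle = workbook.CreateCellStyle();
cellStyle.DataFormat = workbook.GetCreationHelper()
   .CreateDataFormat()
   .GetFormat("dd mmm yy hh:mm");

// write a real DateTime value and attach the style
var dateCell = sheet.CreateRow(0).CreateCell(0);
dateCell.SetCellValue(new DateTime(2016, 6, 1, 9, 30, 0));
dateCell.CellStyle = cellStyle;
```

Without the style, Excel stores the underlying date serial number with a default format, so it’s the cell style that controls what the user actually sees.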