Category Archives: Programming

Fun with Expression objects

Expressions are very useful in many situations. For example, you might have a method to which you pass a lambda expression, and the method can then determine the name of the property or method the lambda is using. Or maybe you’ve used them in the past in view model implementations, pre-nameof. They’re also used a lot in mocking frameworks, to make the arrange steps nice and terse.

For example

Expressions.NameOf(x => x.Property);

In the above example the lambda expression is x => x.Property and the NameOf method determines the name of the property, i.e. it returns the string “Property” in this case.

Note: This example is a replacement for the nameof operator for pre-C# 6 code bases.

Let’s look at some implementations of such expression code.

InstanceOf

So you have code in the format

Expressions.InstanceOf(() => s.Clone());

In this example Expressions is a static class and InstanceOf should extract the instance s from the expression s.Clone(). Obviously we might also need to use the same code for a property, i.e. find the instance of the object exposing the property.

public static object InstanceOf<TRet>(Expression<Func<TRet>> funcExpression)
{
   var method = funcExpression.Body as MethodCallExpression;
   // if not a method, try to get the body (possibly it's a property)
   var memberExpression = method == null ? 
      funcExpression.Body as MemberExpression : 
      method.Object as MemberExpression;

   if(memberExpression == null)
      throw new Exception("Unable to determine expression type, expected () => vm.Property or () => vm.DoSomething()");

   // find top level member expression
   while (memberExpression.Expression is MemberExpression)
      memberExpression = (MemberExpression)memberExpression.Expression;

   var constantExpression = memberExpression.Expression as ConstantExpression;
   if (constantExpression == null)
      throw new Exception("Cannot determine constant expression");

   var fieldInfo = memberExpression.Member as FieldInfo;
   if (fieldInfo == null)
      throw new Exception("Cannot determine fieldinfo object");

   return fieldInfo.GetValue(constantExpression.Value);
}
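To see this in action, here’s a minimal sketch of calling InstanceOf, assuming the Expressions class above is in scope (the variable s is my own example). It works because the compiler captures s as a field on a compiler-generated closure class, which is exactly the ConstantExpression/FieldInfo pair the method walks back to.

```csharp
var s = "Hello";

// the closure captures s as a field on a compiler-generated class,
// so InstanceOf can walk the expression tree back to the instance
object instance = Expressions.InstanceOf(() => s.Clone());

// instance is the very same object that s references
Console.WriteLine(ReferenceEquals(instance, s)); // True
```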

NameOf

As mentioned earlier, let’s look at the code for a simple nameof replacement using expressions. This implementation only works with properties.

public static string NameOf<T>(Expression<Func<T>> propertyExpression) 
{
   if (propertyExpression == null)
      throw new ArgumentNullException("propertyExpression");

   MemberExpression property = null;

   // it's possible to end up with conversions, as the expressions are trying 
   // to convert to the same underlying type
   if (propertyExpression.Body.NodeType == ExpressionType.Convert)
   {
      var convert = propertyExpression.Body as UnaryExpression;
      if (convert != null)
      {
         property = convert.Operand as MemberExpression;
      }
   }

   if (property == null)
   {
      property = propertyExpression.Body as MemberExpression;
   }
   if (property == null)
      throw new Exception(
         "propertyExpression cannot be null and should be passed in the format x => x.PropertyName");

   return property.Member.Name;
}
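Usage then looks like the following sketch (ViewModel here is a hypothetical class of my own, assuming the Expressions class above is in scope):

```csharp
public class ViewModel
{
   public string Title { get; set; }
}

// elsewhere...
var vm = new ViewModel();

// the body of the lambda is a MemberExpression for the Title property
string name = Expressions.NameOf(() => vm.Title); // "Title"
```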

NUnit’s TestCaseSourceAttribute

When we use the TestCaseAttribute with NUnit tests, we can define the parameters to be passed to a unit test, for example

[TestCase(1, 2, 3)]
[TestCase(6, 3, 9)]
public void Sum_EnsureValuesAddCorrectly1(double a, double b, double result)
{
   Assert.AreEqual(result, a + b);
}

Note: In a previous release the TestCaseAttribute also had a Result property, but this doesn’t seem to be the case now, so we’ll expect the result in the parameter list.

This is great, but what if we want our data to come from a dynamic source? We obviously cannot do this with the attributes, but we can by using the TestCaseSourceAttribute.

In its simplest form we could rewrite the above test like this

[Test, TestCaseSource(nameof(Parameters))]
public void Sum_EnsureValuesAddCorrectly(double a, double b, double result)
{
   Assert.AreEqual(result, a + b);
}

private static double[][] Parameters =
{
   new double[] { 1, 2, 3 },
   new double[] { 6, 3, 9 }
};

An alternative to the above is to return TestCaseData objects, as follows

[Test, TestCaseSource(nameof(TestData))]
public double Sum_EnsureValuesAddCorrectly(double a, double b)
{
   return a + b;
}

private static TestCaseData[] TestData =
{
   new TestCaseData(1, 2).Returns(3),
   new TestCaseData(6, 3).Returns(9)
};

Note: In both cases, the TestCaseSourceAttribute expects a static field, property or method to supply the data for our test.

The source which supplies the data (above) doesn’t need to be in the test class; we could use a separate class, such as

[Test, TestCaseSource(typeof(TestDataClass), nameof(TestData))]
public double Sum_EnsureValuesAddCorrectly(double a, double b)
{
   return a + b;
}

class TestDataClass
{
   public static IEnumerable TestData
   {
      get
      {
         yield return new TestCaseData(1, 2).Returns(3);
         yield return new TestCaseData(6, 3).Returns(9);
      }
   }
}
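A static method can also act as the source. Here’s a sketch of the same test using a method rather than a property (the method name GenerateTestData is my own choice):

```csharp
[Test, TestCaseSource(nameof(GenerateTestData))]
public double Sum_EnsureValuesAddCorrectly(double a, double b)
{
   return a + b;
}

// a static method returning test cases works just like the property above
public static IEnumerable<TestCaseData> GenerateTestData()
{
   yield return new TestCaseData(1.0, 2.0).Returns(3.0);
   yield return new TestCaseData(6.0, 3.0).Returns(9.0);
}
```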

Extending our unit test capabilities using the TestCaseSource

If we take a look at NBench Performance Testing – NUnit and ReSharper Integration, we can see how to extend our test capabilities by using NUnit to run our extensions, i.e. with NBench we want to create unit tests which run performance tests within the same NUnit set of tests (or separately, but via the same test runners).

I’m going to recreate a similar set of features for a much simpler performance test.

Note: this code is solely to show how we can create a similar piece of testing functionality; it’s not meant to be compared to NBench in any way. Plus, NUnit also has a MaxTimeAttribute, which would be sufficient for most timing/performance tests.

Let’s start by creating an attribute which we’ll use to detect methods that should be performance tested. Here’s the code for the attribute

[AttributeUsage(AttributeTargets.Method)]
public class PerformanceAttribute : Attribute
{
   public PerformanceAttribute(int max)
   {
      Max = max;
   }

   public int Max { get; set; }
}

The Max property defines a max time (in ms) that a test method should take. If it takes longer than the Max value, we expect a failing test.

Let’s quickly create some tests to show how this might be used

public class TestPerf : PerformanceTestRunner<TestPerf>
{
   [Performance(100)]
   public void Fast_ShouldPass()
   {
      // simulate a 50ms method call
      Task.Delay(50).Wait();
   }

   [Performance(100)]
   public void Slow_ShouldFail()
   {
      // simulate a slow 10000ms method call
      Task.Delay(10000).Wait();
   }
}

Notice we’re not actually marking the class as a TestFixture or the methods as Tests, as the base class PerformanceTestRunner will create the TestCaseData (and therefore, in effect, the test methods) for us.

So let’s look at that base class

public abstract class PerformanceTestRunner<T>
{
   [TestCaseSource(nameof(Run))]
   public void TestRunner(MethodInfo method, int max)
   {
      var sw = new Stopwatch();
      sw.Start();
      method.Invoke(this, 
         BindingFlags.Instance | BindingFlags.InvokeMethod, 
         null, 
         null, 
         null);
      sw.Stop();

      Assert.True(
         sw.ElapsedMilliseconds <= max, 
         method.Name + " took " + sw.ElapsedMilliseconds
      );
   }

   public static IEnumerable Run()
   {
      var methods = typeof(T)
         .GetMethods(BindingFlags.Public | BindingFlags.Instance);
      
      foreach (var m in methods)
      {
         var a = (PerformanceAttribute)m.GetCustomAttribute(typeof(PerformanceAttribute));
         if (a != null)
         {
            yield return 
               new TestCaseData(m, a.Max)
                     .SetName(m.Name);
         }
      }
   }
}

Note: We’re using a method Run to supply TestCaseData. This must be public as it needs to be accessible to NUnit. Also we use SetName on the TestCaseData passing the method’s name, hence we’ll see the method as the test name, not the TestRunner method which actually runs the test.

This is a quick and dirty example which basically locates each method marked with a PerformanceAttribute and yields it, allowing the TestRunner method to run the test method. It simply uses a Stopwatch to check how long the test method took to run and compares this with the Max setting in the PerformanceAttribute. If the time taken is less than or equal to Max, the test passes; otherwise it fails with a message.

When run via a test runner you should see a node in the tree view showing TestPerf, with a child of PerformanceTestRunner.TestRunner, then child nodes below this for each TestCaseData run against TestRunner – i.e. the method names Fast_ShouldPass and Slow_ShouldFail. And that’s it: we’ve reused NUnit and the NUnit runners (such as ReSharper etc.) and created a new testing capability, the performance test.

Auto generating test data with Bogus

I’ve visited this topic (briefly) in the past using NBuilder and AutoFixture.

When writing unit tests it would be useful if we could create objects quickly, with random or, better still, semi-random data.

When I say semi-random, what I mean is that we might have some type with an Id property and we know this Id can only have a certain range of values, so we want a value generated for this property within that range. Or maybe we would like a CreatedDate property with a date that resembles n years in the past, as opposed to just a random date.

This is where libraries such as Faker.Net and Bogus come in – they allow us to generate objects and data which meet certain criteria, and they also include the ability to generate data which “looks” like real data, for example first names, jobs, addresses etc.

Let’s look at an example – first we’ll see what the “model” looks like, i.e. the object we want to generate test data for

public class MyModel
{
   public string Name { get; set; }
   public long Id { get; set; }
   public long Version { get; set; }
   public Date Created { get; set; }
}

public struct Date
{
   public int Day { get; set; }
   public int Month { get; set; }
   public int Year { get; set; }
}

The Date struct is included because it mirrors a similar type of object I get from some web services, and because it obviously requires certain constraints it seemed a good example for writing such code.

Now let’s assume that we want to create a MyModel object. Using Bogus we can create a Faker object and apply constraints or assign random data to it. Here’s an example implementation

var modelFaker = new Faker<MyModel>()
   .RuleFor(o => o.Name, f => f.Name.FirstName())
   .RuleFor(o => o.Id, f => f.Random.Number(100, 200))
   .RuleFor(o => o.Version, f => f.Random.Number(300, 400))
   .RuleFor(o => o.Created, f =>
   {
      var date = f.Date.Past();
      return new Date { Day = date.Day, Month = date.Month, Year = date.Year };
   });

var myModel = modelFaker.Generate();

Initially we create the equivalent of a builder class, in this case the Faker. Whilst we can generate a MyModel without all the rules being set, the rules allow us to customize what’s generated to something more meaningful for our use – especially when it comes to the Date type. So in order, the rules state that the Name property on MyModel should resemble a FirstName, the Id is set to a random value within the range 100-200, likewise the Version is constrained to the range 300-400 and finally the Created property is set by generating a past date and assigning the day, month and year to our Date struct.

Finally we Generate an instance of a MyModel object via the Faker builder class. An example of the values supplied by the Faker object is shown below

Created - Day = 12, Month = 7, Year = 2016
Id - 116
Name - Gwen
Version - 312

Obviously this code only works for classes with a default constructor. So what do we do if there’s no default constructor?

Let’s add the following to the MyModel class

public MyModel(long id, long version)
{
   Id = id;
   Version = version;
}

Now we simply change our Faker to look like this

var modelFaker = new Faker<MyModel>()
   .CustomInstantiator(f => 
      new MyModel(
         f.Random.Number(100, 200), 
         f.Random.Number(300, 400)))
   .RuleFor(o => o.Name, f => f.Name.FirstName())
   .RuleFor(o => o.Created, f =>
   {
      var date = f.Date.Past();
      return new Date { Day = date.Day, Month = date.Month, Year = date.Year };
   });

What if you don’t want to create the whole MyModel object via Faker, but instead just want to generate a valid-looking first name for the Name property? Or what if you are already using something like NBuilder but want to use just the Faker data generation code?

This can easily be achieved by using the non-generic Faker. Create an instance of it and you’ve got access to the same data, so for example

var f = new Faker();

myModel.Name = f.Name.FirstName();
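Other datasets hang off the same non-generic Faker in the same way – for example (these are standard Bogus datasets, though the exact values generated will of course differ on each run):

```csharp
var f = new Faker();

// each dataset exposes themed generators
var city = f.Address.City();
var email = f.Internet.Email();
var job = f.Name.JobTitle();
```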

References

Bogus for .NET/C#

.NET Core

This should be a pretty short post, just to outline using .NET core on Linux/Ubuntu server via Docker.

We could look to install .NET core via apt-get, but I’ve found it so much simpler running up a Docker container with .NET core already implemented in, so let’s do that.

First up, we run

docker run -it microsoft/dotnet:latest

This is an official Microsoft-owned container. The use of :latest means we should get the latest version each time we run this command. The -it switch drops us straight into the Docker instance when it’s started, i.e. into bash.

Now this is great if you’re happy to lose any code within the Docker container when it’s removed, but if you want to link into your host’s file system it’s better to run

docker run -it -v /home/putridparrot/dev:/development microsoft/dotnet:latest

Where /home/putridparrot/dev on the left of the colon, is a folder on your host/server filesystem and maps to a folder inside the Docker instance which will be named development (i.e. it acts like a link/shortcut).

Now when you are within the Docker instance you can save files to the host (and vice versa) and they’ll persist beyond the life of the Docker instance; this also gives us a simple means of copying files from, say, a Windows machine into the instance of dotnet on the Linux server.

And that literally is that.

But let’s write some code and run it to prove everything is working.

To be honest, you should go and look at Install for Windows (or one of the installs for Linux or Mac), as I’m pretty much going to recreate the documentation on running dotnet from those pages.

To run the .NET Core framework (this includes compiling code), we use the command

dotnet

Before we get going, .NET Core is very much a preview/RC release, so there’s no guarantee things will work the same way in a production release. This is the version I’m currently using – running

dotnet --version

we get version 1.0.0-preview2-003131.

Let’s create the standard first application

Now, navigate to home and then make a directory for some source code

cd /home
mkdir HelloWorld
cd HelloWorld

Yes, we’re going to write the usual Hello World application, but that’s simply because the dotnet command has a built-in “new project” template which generates this for us. So run

dotnet new

This creates two files, Program.cs looks like this

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

Nothing particularly interesting here, i.e. it’s a standard Hello World implementation.

However a second file is created (which is a little more interesting), the project.json, which looks like this

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {},
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1"
        }
      },
      "imports": "dnxcore50"
    }
  }
}

Now, we need to run the following from the same folder as the project (as it will use the project.json file)

dotnet restore

this will restore any packages required for the project to build. To build and run the program we use

dotnet run

What are the dotnet cmd options

Obviously you can run dotnet --help and find these out yourself, but just to give a quick overview, this is what you’ll see as a list of commands

Common Commands:
  new           Initialize a basic .NET project
  restore       Restore dependencies specified in the .NET project
  build         Builds a .NET project
  publish       Publishes a .NET project for deployment (including the runtime)
  run           Compiles and immediately executes a .NET project
  test          Runs unit tests using the test runner specified in the project
  pack          Creates a NuGet package

.NET Core in Visual Studio 2015

Visual Studio (2015) offers three .NET core specific project templates, Class Library, Console application and ASP.NET Core.

References

https://docs.microsoft.com/en-us/dotnet/articles/core/tutorials/using-with-xplat-cli
https://docs.asp.net/en/latest/getting-started.html
https://docs.microsoft.com/en-us/dotnet/articles/core/tools/project-json

A quick look at WPF Localization Extensions

Continuing my look into localizing WPF applications…

I’ve covered techniques used by themes to change a WPF application’s resources for the current culture. I’ve looked at using locbaml for extracting resources and localizing them. Now I’m looking at WPF Localization Extensions.

The WPF localization extensions take the route of good old resx files and markup extensions (along with other classes) to implement localization of WPF applications.

Getting Started

  • Create a WPF application
  • Using NuGet, install the WPFLocalizeExtension
  • Open the Properties section, copy Resources.resx and rename the copy to Resources.fr-FR.resx (or whatever culture you wish to support)

As with my other examples of localizing WPF applications, I’m not going to put too much effort into developing a UI as it’s the concepts I’m more interested in at this time.

First off let’s add the following XAML to MainWindow.xaml within the Window namespaces etc.

xmlns:lex="http://wpflocalizeextension.codeplex.com"
lex:LocalizeDictionary.DesignCulture="en"
lex:LocalizeDictionary.OutputMissingKeys="True"
lex:ResxLocalizationProvider.DefaultAssembly="Localize2"
lex:ResxLocalizationProvider.DefaultDictionary="Resources"        

The DefaultDictionary needs to be the name of the resource file, whether that’s Resources (excluding the .resx) or one you’ve created named Strings or whatever – just exclude the extension.

The DefaultAssembly is the name of the assembly to be used as the default for resources, i.e. in this case it’s the name of my project’s assembly.

Next up, within the Grid, we’re going to have this

<TextBlock Text="{lex:Loc Header}" />

Header is a key – obviously on a production ready application with more than one string to translate, we’d probably implement a better naming convention.

A tell-tale sign that things are wrong is the text being displayed as “Key: Header”; this likely points to one of the namespace values being incorrect, such as the DefaultDictionary name.

That’s it for the UI.

In the Resources.resx, add a string named Header and give it a Value, for example mine should default to English and hence will have Hello as the value. In the Resources.fr-FR.resx, add a Header name and give it the value Bonjour.

That’s the extent of our strings for this application. If you run the application you should see the default Resources string, “Hello”. So now let’s look at testing the fr-FR culture.

In App.xaml.cs create a default constructor and place this code within it

LocalizeDictionary.Instance.SetCurrentThreadCulture = true;
LocalizeDictionary.Instance.Culture = new CultureInfo("fr-FR");

Run the application again and the displayed string will be taken from the fr-FR resource file.

To allow us to easily switch, at runtime, between cultures, we can use the LocalizeDictionary. Here’s a combo box selector to do this (taken from the sample source on the WPF Localization Extensions github page).

<ComboBox ItemsSource="{Binding Source={x:Static lex:LocalizeDictionary.Instance}, Path=MergedAvailableCultures}"
   SelectedItem="{Binding Source={x:Static lex:LocalizeDictionary.Instance}, Path=Culture}" DisplayMemberPath="NativeName" />

We also need to be able to get strings from the selected resource in our code; here’s a simple static class from StackOverflow which allows us to get a string (or other type) from the currently selected resources

public static class LocalizationProvider
{
    public static T GetLocalizedValue<T>(string key)
    {
        return LocExtension.GetLocalizedValue<T>(
            Assembly.GetCallingAssembly().GetName().Name + ":Resources:" + key);
    }
}

The string “Resources” should obviously be changed to the name of your resource files (for example if you’re using just strings in a “Strings” resource etc).
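Usage is then simply (assuming the Header key added earlier):

```csharp
// returns "Hello" or "Bonjour" depending on the current culture
var header = LocalizationProvider.GetLocalizedValue<string>("Header");
```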

This is certainly simpler to set up than locbaml; the obvious drawback with this approach is that the strings at design time are not very useful. But if, like me, you tend to code WPF UI primarily in XAML then this probably won’t concern you.

Localizing a WPF application using locbaml

This post is going to mirror the Microsoft post How to: Localize an Application but I’ll try to add something of value to it.

Getting Started

Let’s create a WPF Application, mine’s called Localize1. In the MainWindow add one or more controls – I’m going basic at this point with the following XAML within the Window element of MainWindow.xaml

<Grid>
   <TextBlock>Hello</TextBlock>
</Grid>

According to the Microsoft “How To”, we now place the following line in the csproj

<UICulture>en-US</UICulture>

So locate the end of the <PropertyGroup> elements and put the following

<PropertyGroup>
    <UICulture>en-GB</UICulture>
</PropertyGroup>

See also the comment in AssemblyInfo.cs on using the UICulture element.

Obviously put the culture specific to your default locale – hence mine’s en-GB. Save the altered csproj and of course reload it in Visual Studio if you have the project loaded.

The inclusion of the UICulture will result (when the application is built) in a folder en-GB (in my case) with a single satellite assembly created, named Localize1.resources.dll.

Next we’re going to use msbuild to generate Uid’s for our controls. So from the command line in the project folder of your application run

msbuild /t:updateuid Localize1.csproj

Obviously replace the project name with your own. This should generate Uid’s for the controls within your XAML files. They’re not very descriptive, i.e. Grid_1, TextBlock_1 etc., but we’ll stick with following the “How To” for now. Of course you can implement your own Uid’s and either use msbuild /t:updateuid to generate any missing Uid’s, or ignore them and have Uid’s only for those controls you wish to localize.

We can also verify that Uid’s exist for our controls by running

msbuild /t:checkuid Localize1.csproj

At this point we’ve generated Uid’s for our controls and msbuild generated as part of the compilation a resource DLL for the culture we assigned to the project file.

We now need to look at generating an alternate language resource.

How to create an alternate language resource

We need to download the LocBaml tool or its source. I had problems locating this but luckily found the source on GitHub here.

So if you don’t have LocBaml already, download and build the source and drop the locbaml.exe into your bin\debug folder. Now run the following command

locbaml.exe /parse en-GB/Localize1.resources.dll /out:Localize1.csv

You could of course copy locbaml.exe to the en-GB folder (in my example) if you prefer. What we’re after is for locbaml to generate our Localize1.csv file, which we will then add translated text to.

Here’s what my csv file looks like

Localize1.g.en-GB.resources:mainwindow.baml,Window_1:Localize1.MainWindow.$Content,None,True,True,,#Grid_1;
Localize1.g.en-GB.resources:mainwindow.baml,Window_1:System.Windows.Window.Title,Title,True,True,,MainWindow
Localize1.g.en-GB.resources:mainwindow.baml,Window_1:System.Windows.FrameworkElement.Height,None,False,True,,350
Localize1.g.en-GB.resources:mainwindow.baml,Window_1:System.Windows.FrameworkElement.Width,None,False,True,,525
Localize1.g.en-GB.resources:mainwindow.baml,TextBlock_1:System.Windows.Controls.TextBlock.$Content,Text,True,True,,Hello

If you view this csv in Excel you’ll see 7 columns. These are in the following order (descriptions copied from the How To document)

  • BAML Name. The name of the BAML resource with respect to the source language satellite assembly.
  • Resource Key. The localized resource identifier.
  • Category. The value type.
  • Readability. Whether the value can be read by a localizer.
  • Modifiability. Whether the value can be modified by a localizer.
  • Comments. Additional description of the value to help determine how a value is localized.
  • Value. The text value to translate to the desired culture.

For our translators (if we’re using an external translator to localize our applications) we might wish to supply comments regarding expectations or context for the item to be localized.

So, go ahead and translate the string Hello to an alternate culture, I’m going to change it to Bonjour. Once completed, save the csv as Localize1_fr-FR.csv (or at least in my case translating to French).

Now we want locbaml to generate our new satellite assembly for the French language resources. So, again from the Debug folder (where you should have the generated csv from the original set of resources, as well as the new fr-FR file), create a folder named fr-FR (or whatever your new culture is).

Run the locbaml command

locbaml.exe /generate en-GB/Localize1.resources.dll /trans:Localize1_fr-FR.csv /out:fr-FR /cul:fr-FR

This will generate a new .resources.dll based upon Localize1.resources.dll but using our translated text (as specified in the file Localize1_fr-FR.csv). The new DLL will be written to the fr-FR folder.

Testing our translations

So now let’s see if everything worked by testing our translations in our application.

The easiest way to do this is to edit the App.xaml.cs and if it doesn’t have a constructor, then add one which should look like this

public App()
{
   CultureInfo ci = new CultureInfo("fr-FR");
   Thread.CurrentThread.CurrentCulture = ci;
   Thread.CurrentThread.CurrentUICulture = ci;
}

you’ll obviously require the following using directives as well

using System.Globalization;
using System.Threading;

We’re basically forcing our application to use fr-FR by default when it starts. If all went well, you should see the TextBlock with the text Bonjour.

Now change the culture to one for which you have not generated a set of resources; i.e. in my case I support en-GB and fr-FR, so switching to en-US and running the application has an undesirable effect: an IOException occurs, with the additional information “Cannot locate resource ‘mainwindow.xaml’”. This is not very helpful, but basically means we do not have a “fallback” or “neutral language” resource.

Setting a fallback/neutral language resource

We obviously don’t want to have to create resource files for every possible culture. What we need is a fallback or neutral language resource, which is used when a culture is not supported via a translation DLL. To achieve this, open AssemblyInfo.cs and locate the commented-out line which includes NeutralResourcesLanguage, or just add the following either way

[assembly: NeutralResourcesLanguage("en-GB", UltimateResourceFallbackLocation.Satellite)]

Obviously replace the en-GB with your preferred default language. Run the application again and no IOException should occur; the default (en-GB) resources are used.

What about strings in code?

Well, as the name suggests, locbaml is really localizing our BAML, and when our WPF application starts up it in essence loads our XAML from the resource DLL.

So the string that we’ve embedded in MainWindow.xaml is not separated from the XAML (i.e. it’s embedded within the TextBlock itself). So we need to move our strings into a shared ResourceDictionary and reference them from the UI XAML. For example, in our App.xaml let’s add

<ResourceDictionary>
   <system:String x:Uid="Header_1" x:Key="Header">TBC</system:String>
</ResourceDictionary>

Now, change our MainWindow.xaml to

<TextBlock Text="{StaticResource Header}" />

This allows us to use FindResource to get at the string resource using the standard WPF FindResource method, as per

var headerString = Application.Current.FindResource("Header");

This appears to be the only “easy” way I’ve found of accessing resources, and it requires the resource key, not the Uid. This is obviously not great (if it is the only mechanism) as it then requires that we maintain both a Uid and a Key on each string, control etc. However, if we ensure strings are stored as string resources then this probably isn’t too much of a headache.

References

https://msdn.microsoft.com/en-us/library/ms745650.aspx
https://msdn.microsoft.com/en-us/library/ms788718(v=vs.110).aspx

Localizing a WPF application using dynamic resources

There are several options for localizing a WPF application. We can use resx files, use locbaml to create resource DLLs for us, or why not just use the same technique used by themes, i.e. DynamicResource and ResourceDictionary.

In this post I’m going to look at the DynamicResource and ResourceDictionary approach to localization. Although this technique can obviously be used with images etc., we’ll concentrate on strings, which are usually the main area of localization.

Let’s start with some code

Create a simple WPF application which will use the standard DynamicResource to set text on controls. We will create a “default” set of string resources to allow us to develop our initial application, and we will create two satellite assemblies which contain the same string resources for the en-GB and en-US cultures.

  • Create a WPF Application
  • Create a class library named Resources_en-GB and another class library named Resources_en-US
  • Add the references PresentationCore, PresentationFramework, WindowsBase and System.Xaml to these class libraries
  • Change the class library output folders for Debug and Release to match those of the WPF application, so the DLLs will be built to the same folder as the application

Now in the WPF application, add a ResourceDictionary; mine’s named Strings.xaml and will act as our default/design-time dictionary. Here’s mine

<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                    xmlns:system="clr-namespace:System;assembly=mscorlib">

    <system:String x:Key="Header">TBC</system:String>
    
</ResourceDictionary>

and my MainWindow.xaml looks like this

<Window x:Class="WpfLocalizable.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:WpfLocalizable"
        mc:Ignorable="d"
        Title="MainWindow" Height="350" Width="525">
    <StackPanel>
        <TextBlock Text="{DynamicResource Header}" FontSize="24"/>
        <StackPanel Orientation="Horizontal" VerticalAlignment="Bottom">
            <Button Content="US" Width="100" Margin="10" />
            <Button Content="GB" Width="100" Margin="10" />
        </StackPanel>
    </StackPanel>
</Window>

I didn’t say this was going to be exciting, did I?

Now if you run the application as it currently stands, you’ll see the string TBC – our design-time string.

Next, copy the Strings.xaml file from the application to both the Resources_en-GB and Resources_en-US projects and change the string text to represent your GB and US strings for the header – I used the word Colour in GB and Color in US, just to demonstrate the common language differences.

Now if you build and run the application, you’ll still see the default header text, so we need to make the application set the resources at start-up and allow us to easily switch them. So change the buttons in MainWindow.xaml to these

<Button Content="US" Width="100" Margin="10" Click="US_OnClick"/>
<Button Content="GB" Width="100" Margin="10" Click="GB_OnClick"/>

We’re simply going to use code-behind for changing the resource in this demo. So, in MainWindow.xaml.cs, add the following code

private void LoadStringResource(string locale)
{
   var resources = new ResourceDictionary();

   resources.Source = new Uri("pack://application:,,,/Resources_" + locale + ";component/Strings.xaml", UriKind.Absolute);

   var current = Application.Current.Resources.MergedDictionaries.FirstOrDefault(
                    m => m.Source.OriginalString.EndsWith("Strings.xaml"));


   if (current != null)
   {
      Application.Current.Resources.MergedDictionaries.Remove(current);
   }

   Application.Current.Resources.MergedDictionaries.Add(resources);
}

private void US_OnClick(object sender, RoutedEventArgs e)
{
   LoadStringResource("en-US");
}

private void GB_OnClick(object sender, RoutedEventArgs e)
{
   LoadStringResource("en-GB");
}

and finally, in the constructor, let’s default to en-GB – simply add this line after the call to InitializeComponent

LoadStringResource("en-GB");

Now run the application; by default you should see the en-GB strings, then press the US button to see the en-US versions.

Finishing touches

In some situations we might want to switch the language strings used via an option (very useful when debugging, but also if your natural language is not the same as the default on your machine). In most cases, we’re likely to want to switch the language at start-up to match the machine’s culture/language.
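The start-up case can be reduced to a small, testable helper; LocaleSelector and its names are hypothetical, and the shipped-locale list is assumed to match the satellite assemblies we built above.

```csharp
using System;
using System.Linq;

public static class LocaleSelector
{
    // Pick the satellite locale matching the machine's UI culture,
    // falling back to a default when we don't ship that locale.
    public static string Select(string uiCulture, string[] shipped, string fallback)
    {
        return shipped.Contains(uiCulture, StringComparer.OrdinalIgnoreCase)
            ? uiCulture
            : fallback;
    }
}
```

At start-up we could then call something like LoadStringResource(LocaleSelector.Select(CultureInfo.CurrentUICulture.Name, new[] { "en-GB", "en-US" }, "en-GB")).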

Using a ResourceDictionary might look a little more complex than CSV files, but it should be easy for your translators to use and, being ultimately XML, we could of course write a simple application to allow the translators to view the strings in a tabular format.

We can deploy as many or as few localized resources as we need on a machine.

References

Check out the useful document WPF Globalization and Localization Overview from Microsoft.

Scientist in the making (aka using Science.NET)

When we’re refactoring legacy code, we’ll often try to ensure the existing unit tests (if they exist) or new ones cover as much of the code as possible before refactoring it. But there’s always a concern about turning off the old code completely until we’ve got high confidence in the new code. Obviously the test coverage figures and unit tests themselves should give us that confidence, but wouldn’t it be nice if we instead ran the old and new code in parallel and compared the behaviour, or at least the results, of each? This is where the Scientist library comes in.

Note: This is very much (from my understanding) in an alpha/pre-release stage of development, so any code written here may differ from the way the library ends up working. So basically what I’m saying is this code works at the time of writing.

Getting started

So the elevator pitch for Science.NET is that it “allows us to run two different implementations of code side by side and compare the results”. Let’s expand on that with an example.

First off, we’ll set-up our Visual Studio project.

  • Create a new console application (just because it’s simple to get started with)
  • From the Package Manager Console, execute Install-Package Scientist -Pre

Let’s start with a very simple example: assume we have a method which returns a numeric value – we don’t really need to worry much about what this value means, but if you’d like a back story, let’s assume we import data into an application and the method calculates the confidence that the data matches a known import pattern.

So the legacy code, or the code we wish to verify/test against looks like this

public class Import
{
   public float CalculateConfidenceLevel()
   {
       // do something clever and return a value
       return 0.9f;
   }
}

Now our new Import class looks like this

public class NewImport
{
   public float CalculateConfidenceLevel()
   {
      // do something clever and return a value
      return 0.4f;
   }
}

Okay, okay, I know the result is wrong, but this is meant to demonstrate the Science.NET library, not my Import code.

Right, so what we want to do is run the two versions of the code side by side and see whether they always give the same result. We’re going to simply run these in our console’s Main method for now, but of course the idea is that this code would be run from wherever you currently run the Import code. For now, just add the following to Main (we’ll briefly discuss strategies for running the code after this)

var import = new Import();
var newImport = new NewImport();

float confidence = 
   Scientist.Science<float>(
      "Confidence Experiment", experiment =>
   {
      experiment.Use(() => import.CalculateConfidenceLevel());
      experiment.Try(() => newImport.CalculateConfidenceLevel());
   });

Now, if you run this console application you’ll see the confidence variable has the value 0.9 in it, as the Use code’s result is returned, but the Science method (surely this should be named the Experiment method :)) will actually run both of our methods and compare the results.

Obviously as both the existing and new implementations are run side-by-side, performance might be a concern for complex methods, especially if running like this in production. See the RunIf method for turning on/off individual experiments if this is a concern.

The “Confidence Experiment” string denotes the name of the comparison test and can be useful in reports, but if you ran this code you’ll have noticed everything just worked, i.e. no errors, no reports, nothing. That’s because at this point the default result publisher (which can be accessed via Scientist.ResultPublisher) is an InMemoryResultPublisher, so we need to implement a publisher to output to the console (or maybe to a logger or some other mechanism).

So let’s pretty much take the MyResultPublisher example from Scientist.net, but output to the console, so we have

public class ConsoleResultPublisher : IResultPublisher
{
   public Task Publish<T>(Result<T> result)
   {
      Console.WriteLine(
          $"Publishing results for experiment '{result.ExperimentName}'");
      Console.WriteLine($"Result: {(result.Matched ? "MATCH" : "MISMATCH")}");
      Console.WriteLine($"Control value: {result.Control.Value}");
      Console.WriteLine($"Control duration: {result.Control.Duration}");
      foreach (var observation in result.Candidates)
      {
         Console.WriteLine($"Candidate name: {observation.Name}");
         Console.WriteLine($"Candidate value: {observation.Value}");
         Console.WriteLine($"Candidate duration: {observation.Duration}");
      }

      if (result.Mismatched)
      {
         // saved mismatched experiments to DB
      }

      return Task.FromResult(0);
   }
}

Now insert the following before the float confidence = line in our Main method

Scientist.ResultPublisher = new ConsoleResultPublisher();

Now when you run the code you’ll get the following output in the console window

Publishing results for experiment 'Confidence Experiment'
Result: MISMATCH
Control value: 0.9
Control duration: 00:00:00.0005241
Candidate name: candidate
Candidate value: 0.4
Candidate duration: 00:00:03.9699432

So now you’ll see where the string in the Science method can be used.

More…

Check out the documentation on Scientist.net or the source itself for more information.

Real world usage?

First off, let’s revisit how we might actually design our code to use such a library. The example was created from scratch to demonstrate basic use of the library, but it’s more likely that we’d either create an abstraction layer which instantiates and executes the legacy and new code, or, if possible, add the new method to the legacy implementation. So, in an ideal world, our Import and NewImport classes might implement an IImport interface. It would then be best to implement a new version of this interface and, within its methods, call the Science code, for example

public interface IImport
{
   float CalculateConfidenceLevel();
}

public class ImportExperiment : IImport
{
   private readonly IImport import = new Import();
   private readonly IImport newImport = new NewImport();

   public float CalculateConfidenceLevel()
   {
      return Scientist.Science<float>(
         "Confidence Experiment", experiment =>
         {
            experiment.Use(() => import.CalculateConfidenceLevel());
            experiment.Try(() => newImport.CalculateConfidenceLevel());
         });
   }
}

I’ll leave the reader to put the : IImport after the Import and NewImport classes.

So now our Main method would have the following

Scientist.ResultPublisher = new ConsoleResultPublisher();

var import = new ImportExperiment();
var result = import.CalculateConfidenceLevel();

Using an interface like this now means it’s both easy to switch from the old Import to the experiment implementation and eventually to the new implementation, but then hopefully this is how we always code. I know those years of COM development make interfaces almost the first thing I write along with my love of IoC.

And more…

Comparison replacement

So the simple example above demonstrates the return of a primitive/standard type, but what if the return type is one of our own more complex objects, requiring a more complex comparison? We can supply our own comparison, for example

experiment.Compare((a, b) => a.Name == b.Name);

and of course we could hand this comparison off to a more complex predicate.

Unfortunately the Science method expects a return type and hence if your aim is to run two methods with a void return and maybe test some encapsulated data from the classes within the experiment, then you’ll have to do a lot more work.
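One workaround we could sketch (ImportState and Capture are hypothetical names, not part of the library): wrap each void method so it captures and returns the state we want compared, then hand the wrappers to Use/Try as usual.

```csharp
using System;

// Hypothetical state object we want the experiment to compare.
public class ImportState
{
    public int RowsImported { get; set; }
}

public static class VoidWrapper
{
    // Turn a void-style action into a Func<ImportState> returning the
    // captured state, suitable for experiment.Use/experiment.Try.
    public static Func<ImportState> Capture(Action<ImportState> run)
    {
        return () =>
        {
            var state = new ImportState();
            run(state);
            return state;
        };
    }
}
```

Since ImportState is a reference type, we’d pair this with a Compare (as above) matching on the captured values, e.g. (a, b) => a.RowsImported == b.RowsImported.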

Toggle on or off

The IExperiment interface which we use to call .Use and .Try also has the method RunIf, which I mentioned briefly earlier. We might wish to write our code in such a way that the dev environment runs the experiments but production does not, ensuring our end users do not suffer performance hits due to the experiment running. We can use RunIf in the following manner

experiment.RunIf(() => !environment.IsProduction);

for example.

If we needed to include this line in every experiment it might be quite painful, so it’s actually more likely we’d use this to block/run specific experiments – maybe we run all experiments in all environments, except one very slow experiment.

To enable/disable all experiments, instead we can use

Scientist.Enabled(() => !environment.IsProduction);

Note: this method is not in the NuGet package I’m using but is in the current source on GitHub and in the documentation so hopefully it works as expected in a subsequent release of the NuGet package.

Running something before an experiment

We might need to run something before an experiment starts, but with that code within the context of the experiment – a little like a test set-up method. We can use

experiment.BeforeRun(() => BeforeExperiment());

In the above we’ll run some method BeforeExperiment() before the experiment continues.

Finally

I’ve not covered all the currently available methods here, as the Scientist.net repository already does that, but hopefully I’ve given a peek into what you might do with this library.

ASP.NET MVC and IoC

This should be a nice short post.

As I use IoC a lot in my desktop applications, I also want similar capabilities in an ASP.NET MVC application. I’ll use Unity as the container initially.

  • Create a new project using the Templates | Web | ASP.NET Web Application option in the New Project dialog in Visual Studio, press OK
  • Next, select the MVC template, change authentication (if need be) and check whether to host in the cloud or not, then press OK
  • Select the References section in your solution explorer, right mouse click and select Manage NuGet Packages
  • Locate the Unity.Mvc package and install it

Once installed, we need to locate the App_Start/UnityConfig.cs file and, within the RegisterTypes method, add our mappings as usual, i.e.

container.RegisterType<IServerStatus, ServerStatus>();

There are also other IoC container NuGet packages, including NInject (NInject.MVCx); with these we simply install the package relevant to our version of MVC, for example NInject.MVC4, and we are then supplied with the App_Start/NinjectWebCommon.cs file, where we can use the RegisterServices method to register our mappings, i.e.

kernel.Bind<IServerStatus>().To<ServerStatus>();
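Either way, once the mapping is registered the container supplies the dependency through constructor injection. A minimal sketch, assuming the IServerStatus/ServerStatus mapping above (the Controller base class is omitted and the ServerStatus body is a placeholder so the sketch stands alone):

```csharp
// Hypothetical contract and implementation matching the mapping above.
public interface IServerStatus
{
    string GetStatus();
}

public class ServerStatus : IServerStatus
{
    public string GetStatus() => "OK"; // placeholder implementation
}

// The controller declares its dependency in the constructor; the MVC
// dependency resolver (backed by Unity or NInject) provides it.
public class HomeController
{
    private readonly IServerStatus serverStatus;

    public HomeController(IServerStatus serverStatus)
    {
        this.serverStatus = serverStatus;
    }

    public string Index() => serverStatus.GetStatus();
}
```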

More…

See Extending NerdDinner: Adding MEF and plugins to ASP.NET MVC for information on using MEF with ASP.NET.

Returning to Entity Framework database first

After working with a database project in Visual Studio, I thought it was probably time to create a simple console application to interact with the database using the current version of Entity Framework (v6.0).

So as we’ve already created the cddb database in a previous post, we’ll simply create a new console project and work with that DB.

  • Create your application, as stated mine is a console application
  • Add new item and select Data | ADO.NET Entity Data Model, mine’s named CddbContext (as this will include the source for the data context created for EF)
  • Select Code First from database
  • Create a new connection and supply the relevant details for your database connection
  • Press Next, then select the tables (and views) you want to generate code for – then click Finish

Here’s the code generated

CddbContext.cs

public partial class CddbContext : DbContext
{
   public CddbContext()
      : base("name=CddbContext")
   {
   }

   public virtual DbSet<album> albums { get; set; }
   public virtual DbSet<artist> artists { get; set; }

   protected override void OnModelCreating(DbModelBuilder modelBuilder)
   {
      modelBuilder.Entity<artist>()
         .HasMany(e => e.albums)
         .WithRequired(e => e.artist)
         .WillCascadeOnDelete(false);
   }
}

artist.cs

[Table("artist")]
public partial class artist
{
   [System.Diagnostics.CodeAnalysis.SuppressMessage(
    "Microsoft.Usage", 
    "CA2214:DoNotCallOverridableMethodsInConstructors")]
   public artist()
   {
      albums = new HashSet<album>();
   }

   public int Id { get; set; }

   [Required]
   [StringLength(50)]
   public string Name { get; set; }

   [System.Diagnostics.CodeAnalysis.SuppressMessage(
    "Microsoft.Usage", 
    "CA2227:CollectionPropertiesShouldBeReadOnly")]
   public virtual ICollection<album> albums { get; set; }
}

album.cs

[Table("album")]
public partial class album
{
   public int Id { get; set; }

   [Required]
   [StringLength(50)]
   public string Title { get; set; }

   public int ArtistId { get; set; }

   public virtual artist artist { get; set; }
}

Finally, let’s create a simple bit of code to get the artists from the database, so in Main we have

using (var db = new CddbContext())
{
   var artists = db.artists;
   foreach (var a in artists)
   {
      Console.WriteLine(a.Name);
   }
}

If your database schema changes, you will need to re-run the steps to regenerate your data context etc. or code the changes by hand. There isn’t (currently) a way to update the existing classes – so don’t make changes to the generated code and expect them to survive regeneration.
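Since the generated classes are declared partial, one safe approach is to keep hand-written members in a separate file that regeneration won’t touch. A minimal sketch (the generated half is trimmed, and both halves live in one file here so the example stands alone – in a real project they’d be separate files):

```csharp
// Generated half (trimmed) - regenerated whenever the schema changes.
public partial class artist
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hand-written half, kept in its own file so regeneration leaves it alone.
public partial class artist
{
    public override string ToString() => $"{Id}: {Name}";
}
```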