Category Archives: Programming

Remoting using WCF

It's been a long while since I had to write any remoting code. However, I recently came across a requirement to run a small out-of-process application controlled by another application. I won't bore you with the details, but suffice to say remoting seemed a good fit for this.

Let’s look at some code

Let's start by creating the server/service. In this instance I will host the service in a Console application. So we need to create a Console solution/project (mine's named RemoteServer) and then (to allow us to reuse an interface file) add a Class library (mine's named RemoteInterfaces).

Add the following to the class library

[ServiceContract]
public interface IRemoteService
{
   [OperationContract(IsOneWay = true)]
   void Write(string message);
}

Note: you’ll need a reference to System.ServiceModel in the RemoteInterfaces project, plus using System.ServiceModel; in the source

Add the class library as a reference to the RemoteServer console application.

Now, let’s create the implementation for the server…

[ServiceBehavior(
 ConcurrencyMode = ConcurrencyMode.Single,
 InstanceContextMode = InstanceContextMode.Single)]
public class RemoteService :
   IRemoteService
{
   private ServiceHost host;

   public void Start()
   {
      host = new ServiceHost(this);
      host.Open();
   }

   public void Stop()
   {
      if (host != null)
      {
         if (host.State != CommunicationState.Closed)
         {
            host.Close();
         }
      }
   }

   public void Write(string message)
   {
      Console.WriteLine(String.Format("[Service] {0}", message));
   }
}

We’re going to run this service as a singleton when the console starts, so in Program.cs we need to write the following

try
{
   var svc = new RemoteService();
   svc.Start();

   Console.ReadLine();

   svc.Stop();
}
catch (Exception e)
{
   Debug.WriteLine(e.Message);
}

Finally we need to put some configuration in place to tell WCF how this service is to be hosted. In the console application, add an App.config (if not already available); the finished App.config looks like this

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<system.serviceModel>
    <services>
      <service name="RemoteServer.RemoteService">
        <endpoint binding="netTcpBinding"
                  contract="RemoteInterfaces.IRemoteService">
          <identity>
            <dns value="localhost"/>
          </identity>
        </endpoint>
        <host>
          <baseAddresses>
            <add baseAddress="net.tcp://localhost:8000/RemoteService"/>
          </baseAddresses>
        </host>
      </service>
    </services>
  </system.serviceModel>
</configuration>

Client implementation

Now, let's create a client to test this service. For simplicity, we'll assume the server is run automatically, either at startup or via our client application, but we won't bother writing code for this. So when you're ready to test the client just ensure you've run the server console first. You'll also need to reference the class library containing IRemoteService (RemoteInterfaces).

Note: you’ll need a reference to System.ServiceModel in the RemoteClient project, plus using System.ServiceModel; in the source

Change the client Main method to include the following

var channelFactory = new ChannelFactory<IRemoteService>(
   new NetTcpBinding(),
   new EndpointAddress("net.tcp://localhost:8000/RemoteService"));
var channel = channelFactory.CreateChannel();

channel.Write("Hello World");

channelFactory.Close();

Now if you run this client, the server should output the Hello World text.

We've initially created this client by hard-coding the endpoint etc., but it might be better if this were in configuration.

In your App.config you could have the following

<system.serviceModel>
   <client configSource="Config\servicemodel-client-config.xml" />
</system.serviceModel>	

Note: Obviously the name of the file is up to you.

Alternatively we could place the client configuration directly in the App.config. Either way, the configuration (shown here as it would appear in the external config file) looks like this

<?xml version="1.0" encoding="utf-8"?>
<client>
   <endpoint name="RemoteServer.RemoteService"
      address="net.tcp://localhost:8000/RemoteService"
      binding="netTcpBinding"
      contract="RemoteInterfaces.IRemoteService">
      <identity>
         <servicePrincipalName value=""/>
      </identity>
   </endpoint>
</client>

Note: If placed within the App.config, wrap the client element in a system.serviceModel element

and now we can change our ChannelFactory in the Main method to look like this

var channelFactory = new ChannelFactory<IRemoteService>("*");

The configuration will thus be taken from the configuration file.

Let’s switch from using TCP to using a named pipe

So the previous code and configuration should work correctly using the TCP/IP protocol, but it does rely on a known port being used. In scenarios where you're deploying to a machine where you're not 100% sure the port is available, you could of course offer up several port options (see the sketch below).
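As a rough sketch (not production code, and the extra ports are purely illustrative), the server could try each candidate port in turn and open the host on the first free one; the client would of course need a matching strategy, or some other way of discovering the chosen port.

var candidatePorts = new[] { 8000, 8001, 8002 };

foreach (var port in candidatePorts)
{
   // host the singleton service instance at the candidate address
   var host = new ServiceHost(
      new RemoteService(),
      new Uri(String.Format("net.tcp://localhost:{0}/RemoteService", port)));
   host.AddServiceEndpoint(typeof(IRemoteService), new NetTcpBinding(), String.Empty);

   try
   {
      host.Open();
      break;   // the port was free, we're done
   }
   catch (AddressAlreadyInUseException)
   {
      host.Abort();   // port already in use, try the next one
   }
}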

Another choice (which should also be more performant, although I've not tested this) for interprocess communication on the same machine is to use a named pipe.

For this, change the service's App.config (or external config file) on the server to use the following configuration

<services>
   <service name="RemoteServer.RemoteService">
      <endpoint binding="netNamedPipeBinding"
         contract="RemoteInterfaces.IRemoteService">
         <identity>
            <dns value="localhost"/>
         </identity>
       </endpoint>
       <host>
          <baseAddresses>
            <add baseAddress="net.pipe://localhost/RemoteServer/RemoteService"/>
          </baseAddresses>
       </host>
   </service>
</services>

and in the client code we could use

channelFactory = new ChannelFactory<IRemoteService>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/RemoteServer/RemoteService"));

or, if using configuration instead, we could replace our client configuration with

<client>
   <endpoint name="RemoteServer.RemoteService"
      address="net.pipe://localhost/RemoteServer/RemoteService"
      binding="netNamedPipeBinding"
      contract="RemoteInterfaces.IRemoteService">
      <identity>
         <servicePrincipalName value=""/>
      </identity>
    </endpoint>
</client>

Debugging

You might at times find issues with your client not finding an endpoint, an ipc name or port etc. How can we check whether our server is actually listening on a port or pipe?

For tcp/http protocols, we can use netstat, for example

netstat -a | findstr "8000"

this will list all connections and listening ports (the -a switch) and pipe the output through the findstr application to display only lines containing 8000. Hence if we're running a server off port 8000 it should be listed and we'll know that the server is (at least) running.

For ipc/pipe, the simplest way is via PowerShell by running the following

[System.IO.Directory]::GetFiles("\\.\\pipe\\")

this will list all pipes, which might be a little over the top.

An alternative to the above is to use the Sysinternals PipeList executable.

Whilst we could use the following for old style remoting code (to find a named pipe)

[System.IO.Directory]::GetFiles("\\.\\pipe\\") | findstr "RemoteService"

unfortunately this does not work for WCF named pipes as these are converted to GUIDs. See Named Pipes in WCF are named but not by you (and how to find the actual windows object name).

Debugger Attributes

There are several debugger-related attributes I use fairly regularly, such as DebuggerStepThrough and DebuggerDisplay, but I decided to take a look at some of the other, similar attributes to see if I was missing anything. Here's what I found out.

Introduction

At the time of writing these are the debugging attributes I found

  • DebuggerStepThrough
  • DebuggerDisplay
  • DebuggerNonUserCode
  • DebuggerHidden
  • DebuggerBrowsable
  • DebuggerTypeProxy
  • DebuggerStepperBoundary

Before I get into the specific attributes, I will list, for each one, the attribute targets it may be applied to. When it says an attribute may be set on a method, this also includes the setter/getter (methods) of a property, but not the property itself; that requires the AttributeTargets.Property target to be set (see the example below).
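For example, here's a minimal sketch (the Example class is purely illustrative) showing a method-targeted attribute applied to just the getter of a property:

public class Example
{
   private int value;

   public int Value
   {
      // applied to the getter (a method); to decorate the property itself the
      // attribute would need to support AttributeTargets.Property
      [DebuggerStepThrough]
      get { return value; }
      set { this.value = value; }
   }
}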

DebuggerStepThrough

References

DebuggerStepThroughAttribute Class

Usage

  • Class
  • Struct
  • Constructor
  • Method

Synopsis

Let's start with the one I used a lot in the past (on simple properties, i.e. the equivalent of auto-properties). Placing a DebuggerStepThrough on any of the usage targets will inform the debugger (in Visual Studio; not tested in Xamarin Studio as yet) to simply step through (or in debugging terms, step over) the code. In other words the debugger will not enter a method call.

This is especially useful in situations where you have simple code, such as properties which just return or set a value, as it saves you stepping into these property setters or getters during a debugging session.

Example

public class MyClass
{
   private int counter;

   [DebuggerStepThrough]
   public int Increment()
   {
      return counter++;
   }
}

As stated in the usage, you can also apply this to the class itself to stop the debugger stepping into any code in the class. Useful if the class is full of methods etc. which you don’t need to step into.
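For example, a minimal sketch of the attribute applied at class level:

[DebuggerStepThrough]
public class MyClass
{
   private int counter;

   // the debugger will now step over calls into any member of this class
   public int Increment()
   {
      return counter++;
   }
}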

DebuggerDisplay

Next up, is the DebuggerDisplayAttribute.

References
DebuggerDisplayAttribute Class

Usage

  • Assembly
  • Class
  • Struct
  • Enum
  • Property
  • Field
  • Delegate

Synopsis

In situations where you're debugging and you place the mouse pointer over an instance of MyClass (as defined previously), you'll see the popup say something like

{MatrixTests.MyClass}

next to the variable name. This is fine if all you want is the type name or if you want to drill into the instance to view fields or properties but if you’ve got many fields or properties, it might be nicer if the important one(s) are displayed instead of the type name.

Before we delve into DebuggerDisplay: if your class overrides ToString then its output will normally be displayed by the debugger. If you're using ToString for something else, it won't fulfill the requirement of displaying something different when the instance is hovered over; but if the ToString output has everything you need, then there's no need for a DebuggerDisplay attribute.

Example

Let’s start by seeing the ToString and DebuggerDisplay that would be equivalent

public override string ToString()
{
   return $"Counter = {counter}";
}

// equivalent to 

[DebuggerDisplay("Counter = {counter}")]
public class MyClass
{
   // code
}

Both of the above will show debugger popup of “Counter = 0” (obviously 0 is replaced with the current counter value).

As you can see, within the DebuggerDisplayAttribute we can include text and properties/fields; we just need to enclose the properties/fields in {}. We can also use methods within the DebuggerDisplay. Let's assume we have a private method which creates the debugger output

[DebuggerDisplay("{DebuggerString()}")]
public class MyClass
{
   private string DebuggerString()
   {
      return $"Counter = {counter}";
   }
   // other code
}

As stated previously these all show a debugger display of “Counter = 0” – they also display quotes around the text. We can remove these by specifying nq (nq == no quotes), as per

[DebuggerDisplay("{DebuggerString(), nq}")]

So now the output display is Counter = 0 i.e. without quotes.

You can, of course, extend the format to display more fields/properties, much as per C# 6.0 string interpolation. For example, if we have a matrix class with two properties, Rows and Columns, and want to display these in the debugger display we can use

[DebuggerDisplay("Rows = {Rows}, Columns = {Columns}")]

We can also extend this syntax to use basic expressions, for example

[DebuggerDisplay("Counter = {counter + 10}")]

the above code will evaluate the counter field and then add 10 (as I’m sure you guessed).

Gotchas

Just remember that whatever you place in the DebuggerDisplay will be executed when you view it. So if you’re calling a remote service or database, be ready for the performance hit etc.

DebuggerNonUserCode

References

DebuggerNonUserCodeAttribute Class
Using the DebuggerNonUserCode Attribute in Visual Studio 2015
Change Debugger behavior with Attributes

Usage

  • Class
  • Struct
  • Constructor
  • Method
  • Property

Synopsis

This is very similar to DebuggerStepThrough, but as you can see from the usage, also allows us to apply this attribute to a property as a whole.

Examples

Used in the same way as DebuggerStepThrough, see those examples.
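For completeness, here's a minimal sketch showing that difference, with the attribute applied to a property as a whole:

public class MyClass
{
   private int counter;

   // unlike DebuggerStepThrough, this attribute can target the property itself
   [DebuggerNonUserCode]
   public int Counter
   {
      get { return counter; }
      set { counter = value; }
   }
}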

DebuggerHidden

References

DebuggerHiddenAttribute Class
Using the DebuggerNonUserCode Attribute in Visual Studio 2015
Change Debugger behavior with Attributes

Usage

  • Constructor
  • Method
  • Property

Synopsis

This is very similar to DebuggerStepThrough and DebuggerNonUserCode, but as you can see from the usage it’s somewhat restricted on what it can be applied to.

Examples

Used in the same way as DebuggerStepThrough, see those examples.

DebuggerBrowsable

References

DebuggerBrowsableAttribute Class

Usage

  • Property
  • Field

Synopsis

If you are viewing an instance of MyClass (shown previously) within the popup debug window (displayed when you hover over the instance variable during a Visual Studio debugging session), you will see a + sign to the left of the class, meaning you can expand it to view the fields/properties of the class. In some situations you might want to reduce the fields/properties shown (i.e. the noise). This is especially useful where you have lots of fields or properties of limited usefulness in a debugging scenario, or, pre auto-properties, where you might wish to “hide” all the backing fields. Simply mark the field or property with

[DebuggerBrowsable(DebuggerBrowsableState.Never)]
private int counter;

Now when you view the instance of MyClass, no + will appear, as we've stipulated that this field should never be displayed as browsable.

The other options for DebuggerBrowsableState are

  • DebuggerBrowsableState.Collapsed
  • DebuggerBrowsableState.RootHidden

as per Microsoft documentation…

Collapsed shows the element as collapsed, whilst RootHidden does not display the root element but displays the child elements if the element is a collection or array of items.

Examples

// never show the field in the debugger popup,
// watch or autos windows
[DebuggerBrowsable(DebuggerBrowsableState.Never)]
private int counter;

Basically, RootHidden used on an array property, for example

[DebuggerBrowsable(DebuggerBrowsableState.RootHidden)]
public int[] Counters { get; set; }

will not display Counters itself, but will display the array elements directly.

DebuggerTypeProxy

References

Using DebuggerTypeProxy Attribute

Usage

  • Assembly
  • Class
  • Struct

Synopsis

When we looked at the DebuggerDisplay we found we could reference fields/properties within the class and even use a limited set of expressions. But in some scenarios, such as the one where we added a DebuggerString to the class, it might be preferable to extract the debugger methods/properties etc. to ensure the readability and maintainability of the original class.

Examples

Let’s start by changing the MyClass code to this

[DebuggerTypeProxy(typeof(MyClassDebuggerProxy))]
public class MyClass
{
   private int counter;

   public int Counter => counter;

   public int Increment()
   {
      return counter++;
   }
}

I've added a property, Counter, to allow us to use the counter field in our proxy.

Now create a simple proxy that looks like this

public class MyClassDebuggerProxy
{
}

If you now try to view the fields of an instance of MyClass, the debugger will use the MyClassDebuggerProxy; as this has no fields/properties, nothing will be displayed at the top level in the debugger (the fields will still appear if you expand Raw View). So in this form it's a simple way to hide everything from the debugger, if that is required. Let's be honest, that's not usually the intention, so let's add some properties to the proxy and use the MyClass.Counter property.

public class MyClassDebuggerProxy
{
   private MyClass _instance;

   public MyClassDebuggerProxy(MyClass instance)
   {
      _instance = instance;
   }

   public int DebuggerCounter {  get { return _instance.Counter; } }
}

Now when you debug an instance of MyClass, the debugger will display the property DebuggerCounter from the proxy class. The _instance field will not be displayed. This example is a little pointless, but unlike the limited expression capability of DebuggerDisplay, the full capabilities of .NET can be used to add more data to the debugger, or change debugger displayed data.

Note: If you want to display something other than the type name for the instance of MyClass, then you'll still need to apply a DebuggerDisplay attribute to MyClass itself; applied to the proxy it will be ignored.
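So, as a minimal sketch combining the two, the DebuggerDisplay on MyClass controls the hover text whilst the proxy controls the expanded view:

[DebuggerDisplay("Counter = {Counter}")]
[DebuggerTypeProxy(typeof(MyClassDebuggerProxy))]
public class MyClass
{
   private int counter;

   public int Counter => counter;

   public int Increment()
   {
      return counter++;
   }
}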

DebuggerStepperBoundary

References

DebuggerStepperBoundaryAttribute Class

Usage

  • Constructor
  • Method

Synopsis

This is a slightly odd concept (well it is to me at the moment). Using this attribute on a method (say our Increment method in MyClass) will in essence tell the debugger to switch from stepping mode to run mode. In other words, if you have a breakpoint on a call to myClass.Increment then press F10 or F11 (to step over or step in to the Increment method) the debugger will see the DebuggerStepperBoundary and instead do the equivalent of F5 (or run). From this perspective the use of such an attribute seems fairly limited, however the Microsoft documentation (shown in the references above) states…

“When context switches are made on a thread, the next user-supplied code module stepped into may not relate to the code that was in the process of being debugged. To avoid this debugging experience, use the DebuggerStepperBoundaryAttribute to escape from stepping through code to running code.”
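As a minimal sketch of the usage, reusing MyClass from earlier:

public class MyClass
{
   private int counter;

   [DebuggerStepperBoundary]
   public int Increment()
   {
      // stepping into this method (F10/F11) behaves like pressing F5 (run)
      return counter++;
   }
}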

Covariance and contravariance

Before we begin, let’s define the simplest of classes/hierarchies for our test types…

The following will be used in the subsequent examples

class Base { }
class Derived : Base { }

With that out of the way, let’s look at what covariance and contravariance mean in the context of a programming language (in this case C#).

The covariance and contravariance described below can be used on interfaces or delegates only.

Invariant

Actually, we're not going straight into covariance and contravariance; let's instead look at what an invariant generic type might look like (as we'll be building on this in the examples). Invariant is how our “normal” generics or delegates behave.

Here’s a fairly standard use of a generic

interface IProcessor<T> 
{
}

class Processor<T> : IProcessor<T>
{        
}

If we try to convert an IProcessor<Derived> to an IProcessor<Base>, or vice versa, we'll get a compile-time error. Here's the code that we might be hoping to write

IProcessor<Derived> d = new Processor<Derived>();
IProcessor<Base> b = d;

// or

IProcessor<Base> b = new Processor<Base>();
IProcessor<Derived> d = b;

So, in the above we're creating a Processor<Derived> (in the first instance) and we might have a method that expects an IProcessor<Base>. In our example we simulate this with the assignment of d to b; this will fail to compile. Likewise the opposite is where we try to assign an IProcessor<Base> to an IProcessor<Derived>; again this will fail to compile. At this point IProcessor/Processor are invariant.

Obviously with the derivation of Base/Derived, this would probably be seen as a valid conversion, but not for invariant types.

With this in mind let’s explore covariance and contravariance.

Covariance

Covariance is defined as enabling us to “use a more derived type than originally specified” or, to put it another way, if we have an IEnumerable<Derived> we can assign it to a variable of type IEnumerable<Base> (IEnumerable<T> being covariant in T, unlike, say, IList<T> which is invariant).

Using the code

IProcessor<Derived> d = new Processor<Derived>();
IProcessor<Base> b = d;

we’ve already established, this will not compile. If we add the out keyword to the interface though, this will fix the issue and make the Processor covariant.

Here’s the changes we need to make

interface IProcessor<out T> 
{
}

No changes need to be made on the implementation. Now our assignment from a derived type to a base type will succeed.
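As an aside, a covariant type parameter may only appear in output positions, such as return types. Here's a minimal sketch using a hypothetical IProducer interface (not part of the example above) to illustrate:

interface IProducer<out T>
{
   T Produce();              // allowed: T in an output position
   // void Consume(T item);  // would not compile: T in an input position
}

class Producer<T> : IProducer<T> where T : new()
{
   public T Produce() { return new T(); }
}

// usage
IProducer<Derived> d = new Producer<Derived>();
IProducer<Base> b = d;      // compiles: IProducer<T> is covariant in T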

Contravariance

As you might expect, if covariance allows us to assign a derived type to a base type, contravariance allows us to “use a more generic (less derived) type than originally specified”.

In other words, what if we wanted to do something like

IProcessor<Base> b = new Processor<Base>();
IProcessor<Derived> d = b;

Ignore the fact that, in this example, b cannot actually be the more derived type (Derived).

Again, you may have guessed: if we used the out keyword for covariance, and contravariance is (sort of) the opposite, then we need the opposite of out, i.e. the in keyword. Hence we change the interface only, to

interface IProcessor<in T>
{
}

Now our code will compile.
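Conversely, a contravariant type parameter may only appear in input positions, such as method parameters. Again a minimal sketch, with a hypothetical IConsumer interface:

interface IConsumer<in T>
{
   void Consume(T item);     // allowed: T in an input position
   // T Produce();           // would not compile: T in an output position
}

class Consumer<T> : IConsumer<T>
{
   public void Consume(T item) { }
}

// usage
IConsumer<Base> b = new Consumer<Base>();
IConsumer<Derived> d = b;   // compiles: IConsumer<T> is contravariant in T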

What if we want to extend a covariant/contravariant interface?

As noted, the in/out keywords are used on interfaces or delegates. If we wanted to extend an interface that supports one of these keywords, then we would apply the keyword to the extending interface too, i.e.

interface IProcessor<out T>
{
}

interface IExtendedProcessor<out T> : IProcessor<T>
{
}

obviously we must keep the extending interface the same variance as the base interface (or not mark it as in/out); we apply the keyword as above.

References

Covariance and Contravariance in Generics
Covariance and Contravariance FAQ
in (Generic Modifier) (C# Reference)
out (Generic Modifier) (C# Reference)

Fun with Expression objects

Expressions are very useful in many situations. For example, you might have a method which you pass a lambda expression to, and the method can then determine the name of the property or method the lambda is using. Or maybe you've used them in the past in view model implementations pre-nameof. They're also used a lot in mocking frameworks, to make the arrange steps nice and terse.

For example

Expressions.NameOf(x => x.Property);

In the above example the lambda expression is the x => x.Property and the NameOf method determines the name of the property, i.e. it returns the string “Property” in this case.

Note: This example is a replacement for the nameof operator for pre C# 6 code bases.

Let’s look at some implementations of such expression code.

InstanceOf

So you have code in the format

Expressions.InstanceOf(() => s.Clone());

In this example Expressions is a static class and InstanceOf should extract the instance s from the expression s.Clone(). Obviously we might also need to use the same code for a property, i.e. find the instance of the object that owns the property.

public static object InstanceOf<TRet>(Expression<Func<TRet>> funcExpression)
{
   var method = funcExpression.Body as MethodCallExpression;
   // if not a method, try to get the body (possibly it's a property)
   var memberExpression = method == null ? 
      funcExpression.Body as MemberExpression : 
      method.Object as MemberExpression;

   if(memberExpression == null)
      throw new Exception("Unable to determine expression type, expected () => vm.Property or () => vm.DoSomething()");

   // find top level member expression
   while (memberExpression.Expression is MemberExpression)
      memberExpression = (MemberExpression)memberExpression.Expression;

   var constantExpression = memberExpression.Expression as ConstantExpression;
   if (constantExpression == null)
      throw new Exception("Cannot determine constant expression");

   var fieldInfo = memberExpression.Member as FieldInfo;
   if (fieldInfo == null)
      throw new Exception("Cannot determine fieldinfo object");

   return fieldInfo.GetValue(constantExpression.Value);
}
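As a quick usage sketch (MyViewModel and its Property are purely hypothetical), both a method call and a property access resolve to the owning instance:

var s = "Hello";
var instance = Expressions.InstanceOf(() => s.Clone());   // returns s

var vm = new MyViewModel();
var owner = Expressions.InstanceOf(() => vm.Property);    // returns vm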

NameOf

As this was mentioned earlier, let’s look at the code for a simple nameof replacement using Expressions. This implementation only works with properties

public static string NameOf<T>(Expression<Func<T>> propertyExpression) 
{
   if (propertyExpression == null)
      throw new ArgumentNullException("propertyExpression");

   MemberExpression property = null;

   // it's possible to end up with conversions, as the expressions are trying 
   // to convert to the same underlying type
   if (propertyExpression.Body.NodeType == ExpressionType.Convert)
   {
      var convert = propertyExpression.Body as UnaryExpression;
      if (convert != null)
      {
         property = convert.Operand as MemberExpression;
      }
   }

   if (property == null)
   {
      property = propertyExpression.Body as MemberExpression;
   }
   if (property == null)
      throw new Exception(
         "propertyExpression cannot be null and should be passed in the format x => x.PropertyName");

   return property.Member.Name;
}
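And a quick usage sketch (PersonViewModel and FirstName are hypothetical):

var vm = new PersonViewModel();
var name = Expressions.NameOf(() => vm.FirstName);   // returns "FirstName"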

NUnit's TestCaseSourceAttribute

When we use the TestCaseAttribute with NUnit tests, we can define the parameters to be passed to a unit test, for example

[TestCase(1, 2, 3)]
[TestCase(6, 3, 9)]
public void Sum_EnsureValuesAddCorrectly1(double a, double b, double result)
{
   Assert.AreEqual(result, a + b);
}

Note: In a previous release the TestCaseAttribute also had a Result property, but this doesn't seem to be the case now, so we'll expect the result in the parameter list.

This is great, but what if we want our data to come from a dynamic source? We obviously cannot do this with the attribute arguments, but we can using the TestCaseSourceAttribute.

In its simplest form we could rewrite the above test like this

[Test, TestCaseSource(nameof(Parameters))]
public void Sum_EnsureValuesAddCorrectly(double a, double b, double result)
{
   Assert.AreEqual(result, a + b);
}

private static double[][] Parameters =
{
   new double[] { 1, 2, 3 },
   new double[] { 6, 3, 9 }
};

An alternative to the above is to return TestCaseData objects, as follows

[Test, TestCaseSource(nameof(TestData))]
public double Sum_EnsureValuesAddCorrectly(double a, double b)
{
   return a + b;
}

private static TestCaseData[] TestData =
{
   new TestCaseData(1, 2).Returns(3),
   new TestCaseData(6, 3).Returns(9)
};

Note: In both cases, the TestCaseSourceAttribute expects a static property or method to supply the data for our test.

The property which returns the array (above) doesn't need to be in the test class; we could use a separate class, such as

[Test, TestCaseSource(typeof(TestDataClass), nameof(TestData))]
public double Sum_EnsureValuesAddCorrectly(double a, double b)
{
   return a + b;
}

class TestDataClass
{
   public static IEnumerable TestData
   {
      get
      {
         yield return new TestCaseData(1, 2).Returns(3);
         yield return new TestCaseData(6, 3).Returns(9);
      }
   }
}

Extending our unit test capabilities using the TestCaseSource

If we take a look at NBench Performance Testing – NUnit and ReSharper Integration we can see how to extend our test capabilities using NUnit to run our extensions. i.e. with NBench we want to create unit tests to run performance tests within the same NUnit set of tests (or separately but via the same test runners).

I’m going to recreate a similar set of features for a more simplistic performance test.

Note: this code is solely to show how we can create a similar piece of testing functionality; it's not meant to be compared to NBench in any way, plus NUnit also has a MaxTimeAttribute which would be sufficient for most timing/performance tests.

Let’s start by creating an attribute which will use to detect methods which should be performance tested. Here’s the code for the attribute

[AttributeUsage(AttributeTargets.Method)]
public class PerformanceAttribute : Attribute
{
   public PerformanceAttribute(int max)
   {
      Max = max;
   }

   public int Max { get; set; }
}

The Max property defines a max time (in ms) that a test method should take. If it takes longer than the Max value, we expect a failing test.

Let’s quickly create some tests to show how this might be used

public class TestPerf : PerformanceTestRunner<TestPerf>
{
   [Performance(100)]
   public void Fast_ShouldPass()
   {
      // simulate a 50ms method call
      Task.Delay(50).Wait();
   }

   [Performance(100)]
   public void Slow_ShouldFail()
   {
      // simulate a slow 10000ms method call
      Task.Delay(10000).Wait();
   }
}

Notice we’re not actually marking the class as a TestFixture or the methods as Tests, as the base class PerformanceTestRunner will create the TestCaseData for us and therefore the test methods (as such).

So let’s look at that base class

public abstract class PerformanceTestRunner<T>
{
   [TestCaseSource(nameof(Run))]
   public void TestRunner(MethodInfo method, int max)
   {
      var sw = new Stopwatch();
      sw.Start();
      method.Invoke(this, 
         BindingFlags.Instance | BindingFlags.InvokeMethod, 
         null, 
         null, 
         null);
      sw.Stop();

      Assert.True(
         sw.ElapsedMilliseconds <= max, 
         method.Name + " took " + sw.ElapsedMilliseconds
      );
   }

   public static IEnumerable Run()
   {
      var methods = typeof(T)
         .GetMethods(BindingFlags.Public | BindingFlags.Instance);
      
      foreach (var m in methods)
      {
         var a = (PerformanceAttribute)m.GetCustomAttribute(typeof(PerformanceAttribute));
         if (a != null)
         {
            yield return 
               new TestCaseData(m, a.Max)
                     .SetName(m.Name);
         }
      }
   }
}

Note: We’re using a method Run to supply TestCaseData. This must be public as it needs to be accessible to NUnit. Also we use SetName on the TestCaseData passing the method’s name, hence we’ll see the method as the test name, not the TestRunner method which actually runs the test.

This is a quick and dirty example, which basically locates each method with a PerformanceAttribute and yields this to allow the TestRunner method to run the test method. It simply uses a stopwatch to check how long the test method took to run and compares with the setting for Max in the PerformanceAttribute. If the time to run the test method is less than or equal to Max, then the test passed, otherwise it fails with a message.

When run via a test runner you should see a node in the tree view showing TestPerf, with a child of PerformanceTestRunner.TestRunner, then child nodes below this for each TestCaseData run against the TestRunner; we'll see the method names Fast_ShouldPass and Slow_ShouldFail. And that's it: we've reused NUnit and the NUnit runners (such as ReSharper etc.) and created a new testing capability, the performance test.

Auto generating test data with Bogus

I’ve visited this topic (briefly) in the past using NBuilder and AutoFixture.

When writing unit tests it would be useful if we could create objects quickly, with random or, better still, semi-random data.

When I say semi-random, what I mean is that we might have some type with an Id property and we know this Id can only have a certain range of values, so we want a value generated for this property within that range; or maybe we would like a CreatedDate property with data that resembles n years in the past, as opposed to just a random date.

This is where libraries such as Faker.Net and Bogus come in – they allow us to generate objects and data, which meets certain criteria and also includes the ability to generate data which “looks” like real data. For example, first names, jobs, addresses etc.

Let’s look at an example – first we’ll see what the “model” looks like, i.e. the object we want to generate test data for

public class MyModel
{
   public string Name { get; set; }
   public long Id { get; set; }
   public long Version { get; set; }
   public Date Created { get; set; }
}

public struct Date
{
   public int Day { get; set; }
   public int Month { get; set; }
   public int Year { get; set; }
}

The date struct was included because this mirrors a similar type of object I get from some web services and because it obviously requires certain constraints, hence seemed a good example of writing such code.

Now let's assume that we want to create a MyModel object. Using Bogus we can create a Faker object and apply constraints or assign random data to our model's properties. Here's an example implementation

var modelFaker = new Faker<MyModel>()
   .RuleFor(o => o.Name, f => f.Name.FirstName())
   .RuleFor(o => o.Id, f => f.Random.Number(100, 200))
   .RuleFor(o => o.Version, f => f.Random.Number(300, 400))
   .RuleFor(o => o.Created, f =>
   {
      var date = f.Date.Past();
      return new Date { Day = date.Day, Month = date.Month, Year = date.Year };
   });

var myModel = modelFaker.Generate();

Initially we create the equivalent of a builder class, in this case the Faker. Whilst we can generate a MyModel without all the rules being set, the rules allow us to customize what’s generated to something more meaningful for our use – especially when it comes to the Date type. So in order, the rules state that the Name property on MyModel should resemble a FirstName, the Id is set to a random value within the range 100-200, likewise the Version is constrained to the range 300-400 and finally the Created property is set by generating a past date and assigning the day, month and year to our Date struct.

Finally we Generate an instance of a MyModel object via the Faker builder class. An example of the values supplied by the Faker object are shown below

Created - Day = 12, Month = 7, Year = 2016
Id - 116
Name - Gwen
Version - 312

Obviously this code only works for classes with a default constructor. So what do we do if there’s no default constructor?

Let’s add the following to the MyModel class

public MyModel(long id, long version)
{
   Id = id;
   Version = version;
}

Now we simply change our Faker to look like this

var modelFaker = new Faker<MyModel>()
   .CustomInstantiator(f => 
      new MyModel(
         f.Random.Number(100, 200), 
         f.Random.Number(300, 400)))
   .RuleFor(o => o.Name, f => f.Name.FirstName())
   .RuleFor(o => o.Created, f =>
   {
      var date = f.Date.Past();
      return new Date { Day = date.Day, Month = date.Month, Year = date.Year };
   });

What if you don't want to create the whole MyModel object via Faker, but instead just want to generate a valid-looking first name for the Name property? Or what if you are already using something like NBuilder but want to just use the Faker data generation code?

This can easily be achieved by using the non-generic Faker. Create an instance of it and you’ve got access to the same data, so for example

var f = new Faker();

myModel.Name = f.Name.FirstName();

References

Bogus for .NET/C#

.NET Core

This should be a pretty short post, just to outline using .NET core on Linux/Ubuntu server via Docker.

We could look to install .NET Core via apt-get, but I've found it so much simpler running up a Docker container with .NET Core already installed, so let's do that.

First up, we run

docker run -it microsoft/dotnet:latest

This is an official Microsoft-owned container. The use of :latest means we should get the latest version each time we run this command. The -it switch drops us straight into the Docker instance when it's started, i.e. into bash.

Now this is great if you're happy to lose any code within the Docker image when it's removed, but if you want to link into your host's file system it's better to run

docker run -it -v /home/putridparrot/dev:/development microsoft/dotnet:latest

Where /home/putridparrot/dev on the left of the colon, is a folder on your host/server filesystem and maps to a folder inside the Docker instance which will be named development (i.e. it acts like a link/shortcut).

Now when you are within the Docker instance you can save files to the host (and vice versa) and they'll persist beyond the life of the Docker instance; this also gives us a simple means of copying files from, say, a Windows machine into the instance of dotnet on the Linux server.

And that literally is that.

But let’s write some code and run it to prove everything is working.

To be honest, you should go and look at Install for Windows (or one of the installs for Linux or Mac) as I’m pretty much going to recreate the documentation on running dotnet from these pages

To run the .NET Core framework (this includes compiling code) we use the command

dotnet

Before we get going, .NET core is very much a preview/RC release, so this is the version I’m currently using (there’s no guarantee this will work the same way in a production release), running

dotnet --version

we get version 1.0.0-preview2-003131.

Let’s create the standard first application

Now, navigate to home and then make a directory for some source code

cd /home
mkdir HelloWorld
cd HelloWorld

Yes, we're going to write the usual Hello World application, but that's simply because the dotnet command has a built-in “new project” template which generates this for us. So run

dotnet new

This creates two files; Program.cs looks like this

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

nothing particularly interesting here, i.e. it's a standard Hello World implementation.

However a second file is created (which is a little more interesting), the project.json, which looks like this

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {},
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1"
        }
      },
      "imports": "dnxcore50"
    }
  }
}

Now, we need to run the following from the same folder as the project (as it will use the project.json file)

dotnet restore

this will restore any packages required for the project to build. To build and run the program we use

dotnet run

What are the dotnet cmd options

Obviously you can run dotnet --help and find these out yourself, but just to give a quick overview, this is what you'll see as a list of commands

Common Commands:
  new           Initialize a basic .NET project
  restore       Restore dependencies specified in the .NET project
  build         Builds a .NET project
  publish       Publishes a .NET project for deployment (including the runtime)
  run           Compiles and immediately executes a .NET project
  test          Runs unit tests using the test runner specified in the project
  pack          Creates a NuGet package

.NET Core in Visual Studio 2015

Visual Studio (2015) offers three .NET core specific project templates, Class Library, Console application and ASP.NET Core.

References

https://docs.microsoft.com/en-us/dotnet/articles/core/tutorials/using-with-xplat-cli
https://docs.asp.net/en/latest/getting-started.html
https://docs.microsoft.com/en-us/dotnet/articles/core/tools/project-json

A quick look at WPF Localization Extensions

Continuing my look into localizing WPF applications…

I've covered techniques used by themes to change a WPF application's resources for the current culture. I've looked at using locbaml for extracting resources and localizing them. Now I'm looking at WPF Localization Extensions.

The WPF Localization Extensions take the route of good old resx files and markup extensions (along with other classes) to implement localization of WPF applications.

Getting Started

  • Create a WPF application
  • Using NuGet, install the WPFLocalizeExtension
  • Open the Properties section and copy Resources.resx and rename the copy Resources.fr-FR.resx (or whatever culture you wish to support)

As with my other examples of localizing WPF applications, I’m not going to put too much effort into developing a UI as it’s the concepts I’m more interested in at this time.

First off let's add the following XAML to MainWindow.xaml, within the Window element's namespace declarations etc.

xmlns:lex="http://wpflocalizeextension.codeplex.com"
lex:LocalizeDictionary.DesignCulture="en"
lex:LocalizeDictionary.OutputMissingKeys="True"
lex:ResxLocalizationProvider.DefaultAssembly="Localize2"
lex:ResxLocalizationProvider.DefaultDictionary="Resources"        

The DefaultDictionary needs to hold the name of the resource file, whether it's Resources (excluding the .resx) or one you've created named Strings or whatever; just exclude the extension.

The DefaultAssembly is the name of the assembly to be used as the default for resources, i.e. in this case it’s the name of my project’s assembly.

Next up, within the Grid, we’re going to have this

<TextBlock Text="{lex:Loc Header}" />

Header is a key – obviously on a production ready application with more than one string to translate, we’d probably implement a better naming convention.

A tell-tale sign that things are wrong is the text being displayed as Key: Header; this likely points to one of the namespace values being incorrect, such as the DefaultDictionary name.

That’s it for the UI.

In the Resources.resx, add a string named Header and give it a Value, for example mine should default to English and hence will have Hello as the value. In the Resources.fr-FR.resx, add a Header name and give it the value Bonjour.

That’s the extent of our strings for this application. If you run the application you should see the default Resources string, “Hello”. So now let’s look at testing the fr-FR culture.

In App.xaml.cs create a default constructor and place this code within it

LocalizeDictionary.Instance.SetCurrentThreadCulture = true;
LocalizeDictionary.Instance.Culture = new CultureInfo("fr-FR");

Run the application again and the displayed string will be taken from the fr-FR resource file.

To allow us to easily switch, at runtime, between cultures, we can use the LocalizeDictionary. Here’s a combo box selector to do this (taken from the sample source on the WPF Localization Extensions github page).

<ComboBox ItemsSource="{Binding Source={x:Static lex:LocalizeDictionary.Instance}, Path=MergedAvailableCultures}"
   SelectedItem="{Binding Source={x:Static lex:LocalizeDictionary.Instance}, Path=Culture}" DisplayMemberPath="NativeName" />

We also need to be able to get strings from the selected resource in our code; here's a simple static class from StackOverflow which allows us to get a string (or other type) from the currently selected resources

public static class LocalizationProvider
{
    public static T GetLocalizedValue<T>(string key)
    {
        return LocExtension.GetLocalizedValue<T>(
            Assembly.GetCallingAssembly().GetName().Name + ":Resources:" + key);
    }
}

The string “Resources” should obviously be changed to the name of your resource files (for example if you’re using just strings in a “Strings” resource etc).
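Usage is then as simple as, for example, retrieving the Header key we defined earlier:

var header = LocalizationProvider.GetLocalizedValue<string>("Header");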

This is certainly simpler to set up than locbaml; the obvious drawback with this approach is that the strings at design time are not very useful. But if, like me, you tend to code WPF UI primarily in XAML then this probably won't concern you.

Localizing a WPF application using locbaml

This post is going to mirror the Microsoft post How to: Localize an Application but I’ll try to add something of value to it.

Getting Started

Let’s create a WPF Application, mine’s called Localize1. In the MainWindow add one or more controls – I’m going basic at this point with the following XAML within the Window element of MainWindow.xaml

<Grid>
   <TextBlock>Hello</TextBlock>
</Grid>

According to the Microsoft “How To”, we now place the following line in the csproj

<UICulture>en-US</UICulture>

So locate the end of the <PropertyGroup> elements and put the following

<PropertyGroup>
    <UICulture>en-GB</UICulture>
</PropertyGroup>

See also the comment in AssemblyInfo.cs on using the UICulture element.

Obviously put the culture specific to your default locale, hence mine's en-GB. Save the altered csproj and of course reload in Visual Studio if you have the project loaded.

The inclusion of the UICulture will result (when the application is built) in a folder en-GB (in my case) with a single satellite assembly created, named Localize1.resources.dll.

Next we’re going to use msbuild to generate Uid’s for our controls. So from the command line in the project folder of your application run

msbuild /t:updateuid Localize1.csproj

Obviously replace the project name with your own. This should generate Uids for controls within your XAML files. They're not very descriptive, i.e. Grid_1, TextBlock_1 etc., but we'll stick with following the “How To” for now. Of course you can implement your own Uids and either use msbuild /t:updateuid to generate any missing Uids, or ignore them and have Uids only for those controls you wish to localize.

We can also verify that Uids exist for our controls by running

msbuild /t:checkuid Localize1.csproj

At this point we've generated Uids for our controls and msbuild has generated, as part of the compilation, a resource DLL for the culture we assigned to the project file.

We now need to look at generating alternate language resource.

How to create an alternate language resource

We need to download the LocBaml tool or its source. I had problems locating this but luckily found the source on GitHub here.

So if you don’t have LocBaml already, download and build the source and drop the locbaml.exe into your bin\debug folder. Now run the following command

locbaml.exe /parse en-GB/Localize1.resources.dll /out:Localize1.csv

You could of course copy locbaml.exe to the en-GB folder in my example if you prefer. What we're after is for locbaml to generate our Localize1.csv file, to which we will then add translated text.

Here’s what my csv file looks like

Localize1.g.en-GB.resources:mainwindow.baml,Window_1:Localize1.MainWindow.$Content,None,True,True,,#Grid_1;
Localize1.g.en-GB.resources:mainwindow.baml,Window_1:System.Windows.Window.Title,Title,True,True,,MainWindow
Localize1.g.en-GB.resources:mainwindow.baml,Window_1:System.Windows.FrameworkElement.Height,None,False,True,,350
Localize1.g.en-GB.resources:mainwindow.baml,Window_1:System.Windows.FrameworkElement.Width,None,False,True,,525
Localize1.g.en-GB.resources:mainwindow.baml,TextBlock_1:System.Windows.Controls.TextBlock.$Content,Text,True,True,,Hello

If you view this csv in Excel you'll see 7 columns. These are in the following order (descriptions copied from the How To document)

  • BAML Name. The name of the BAML resource with respect to the source language satellite assembly.
  • Resource Key. The localized resource identifier.
  • Category. The value type.
  • Readability. Whether the value can be read by a localizer.
  • Modifiability. Whether the value can be modified by a localizer.
  • Comments. Additional description of the value to help determine how a value is localized.
  • Value. The text value to translate to the desired culture.

For our translators (if we’re using an external translator to localize our applications) we might wish to supply comments regarding expectations or context for the item to be localized.

So, go ahead and translate the string Hello to an alternate culture; I'm going to change it to Bonjour. Once completed, save the csv as Localize1_fr-FR.csv (in my case, as I'm translating to French).

Now we want to get locbaml to generate our new satellite assembly for the French language resources, so again from the Debug folder (where you should have the generated csv from the original set of resources as well as the new fr-FR file) create a folder named fr-FR (or whatever your new culture is).

Run the locbaml command

locbaml.exe /generate en-GB/Localize1.resources.dll /trans:Localize1_fr-FR.csv /out:fr-FR /cul:fr-FR

This will generate a new .resources.dll based upon Localize1.resources.dll but using our translated text (as specified in the file Localize1_fr-FR.csv). The new DLL will be written to the fr-FR folder.

Testing our translations

So now let’s see if everything worked by testing our translations in our application.

The easiest way to do this is to edit the App.xaml.cs and if it doesn’t have a constructor, then add one which should look like this

public App()
{
   CultureInfo ci = new CultureInfo("fr-FR");
   Thread.CurrentThread.CurrentCulture = ci;
   Thread.CurrentThread.CurrentUICulture = ci;
}

you'll obviously require the following using directives as well

using System.Globalization;
using System.Threading;

We’re basically forcing our application to use fr-FR by default when it starts. If all went well, you should see the TextBlock with the text Bonjour.

Now change the Culture to one for which you have not generated a set of resources, i.e. in my case I support en-GB and fr-FR, so switching to en-US and running the application will have an undesirable effect: an IOException occurs, with additional information “Cannot locate resource ‘mainwindow.xaml'”. This is not very helpful, but basically means we do not have a “fallback” or “neutral language” resource.

Setting a fallback/neutral language resource

We obviously don't want to have to create resource files for every possible culture. What we need is a fallback or neutral language resource which is used when a culture is not supported via a translation DLL. To achieve this, open AssemblyInfo.cs and locate the commented-out line which includes NeutralResourcesLanguage (or just add the following)

[assembly: NeutralResourcesLanguage("en-GB", UltimateResourceFallbackLocation.Satellite)]

obviously replace the en-GB with your preferred default language. Run the application again and no IOException should occur; the default en-GB resources are used.

What about strings in code?

Well, as the name suggests, locbaml is really localizing our BAML, and when our WPF application starts up it in essence replaces our XAML with the version stored in the resource DLL.

So the string that we’ve embedded in the MainWindow.xaml, is not separated from the XAML (i.e. it’s embedded within the TextBlock itself). So we need to move our strings into a shared ResourceDictionary file and reference them from the UI XAML. For example in our App.xaml let’s add

<ResourceDictionary>
   <system:String x:Uid="Header_1" x:Key="Header">TBC</system:String>
</ResourceDictionary>

Now, change our MainWindow.xaml to

<TextBlock Text="{StaticResource Header}" />

This allows us to get at the string resource using the standard WPF FindResource method, as per

var headerString = Application.Current.FindResource("Header");

This appears to be the only “easy” way I’ve found of accessing resources and requires the resource key, not the Uid. This is obviously not great (if it is the only mechanism) as it then requires that we maintain both Uid and Key on each string, control etc. However if we ensure strings are stored as string resources then this probably isn’t too much of a headache.

References

https://msdn.microsoft.com/en-us/library/ms745650.aspx
https://msdn.microsoft.com/en-us/library/ms788718(v=vs.110).aspx

Localizing a WPF application using dynamic resources

There are several options for localizing a WPF application. We can use resx files, or locbaml to create resource DLLs for us, or why not just use the same technique used in themes, i.e. DynamicResource and ResourceDictionary.

In this post I’m going to look at the DynamicResource and ResourceDictionary approach to localization. Although this technique can obviously be used with images etc., we’ll concentrate on dealing with strings, which usually are the main area of localization.

Let’s start with some code

Create a simple WPF application which will use the standard DynamicResource to set text on controls. We will create a “default” set of string resources to allow us to develop our initial application, and we will create two satellite assemblies which will contain the same string resources for the en-GB and en-US cultures.

  • Create a WPF Application
  • Create a class library named Resources_en-GB and another class library named Resources_en-US
  • Add references to PresentationCore, PresentationFramework, WindowsBase and System.Xaml to these class libraries
  • Change the class library output folders for Debug and Release to match those for the WPF application so the DLLs will be built to the same folder as the application

Now in the WPF application, add a ResourceDictionary, mine’s named Strings.xaml and this will act as our default/design-time dictionary, here’s mine

<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                    xmlns:system="clr-namespace:System;assembly=mscorlib">

    <system:String x:Key="Header">TBC</system:String>
    
</ResourceDictionary>

and my MainWindow.xaml looks like this

<Window x:Class="WpfLocalizable.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:WpfLocalizable"
        mc:Ignorable="d"
        Title="MainWindow" Height="350" Width="525">
    <StackPanel>
        <TextBlock Text="{DynamicResource Header}" FontSize="24"/>
        <StackPanel Orientation="Horizontal" VerticalAlignment="Bottom">
            <Button Content="US" Width="100" Margin="10" />
            <Button Content="GB" Width="100" Margin="10" />
        </StackPanel>
    </StackPanel>
</Window>

I didn’t say this was going to be exciting, did I?

Now if you run the application as it currently stands, you’ll see the string TBC – our design-time string.

Next, copy the Strings.xaml file from the application to both the Resources_en-GB and Resources_en-US projects and change the string text to represent your GB and US strings for Header – I used the word Colour in GB and Color in US, just to demonstrate the common language differences.

Now if you build and run the application, you’ll see the default header text still, so we now need to make the application set the resources at start-up and allow us to easily switch them. So change the buttons in MainWindow.xaml to these

<Button Content="US" Width="100" Margin="10" Click="US_OnClick"/>
<Button Content="GB" Width="100" Margin="10" Click="GB_OnClick"/>

We’re going to simply use code behind for changing the resource in this demo. So in MainWindow.xaml.cs add the following code

private void LoadStringResource(string locale)
{
   var resources = new ResourceDictionary();

   resources.Source = new Uri("pack://application:,,,/Resources_" + locale + ";component/Strings.xaml", UriKind.Absolute);

   var current = Application.Current.Resources.MergedDictionaries.FirstOrDefault(
                    m => m.Source.OriginalString.EndsWith("Strings.xaml"));


   if (current != null)
   {
      Application.Current.Resources.MergedDictionaries.Remove(current);
   }

   Application.Current.Resources.MergedDictionaries.Add(resources);
}

private void US_OnClick(object sender, RoutedEventArgs e)
{
   LoadStringResource("en-US");
}

private void GB_OnClick(object sender, RoutedEventArgs e)
{
   LoadStringResource("en-GB");
}

and finally in the constructor let’s default to en-GB, so simply add this line after the InitializeComponent

LoadStringResource("en-GB");

Now run the application; by default you should see the en-GB strings, and then press the US button to see the en-US version etc.

Finishing touches

In some situations we might want to switch the language strings used via an option (very useful when debugging, but also if your natural language is not the same as the default on your machine). In most cases, we're likely to want to switch the language at start-up to match the machine's culture/language, as shown in the sketch below.
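As a minimal sketch (reusing LoadStringResource from the code behind above and assuming a matching Resources_<culture> class library has actually been deployed), the MainWindow constructor could use the machine's UI culture instead of the hard-coded "en-GB":

// requires using System.Globalization;
var culture = CultureInfo.CurrentUICulture.Name;   // e.g. "en-GB" or "en-US"
LoadStringResource(culture);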

Using ResourceDictionary might look a little more complex than CSV files, but it should be easy for your translators to use and, being ultimately XML, we could of course write a simple application to allow the translators to view strings etc. in a tabular format.

We can deploy as many or as few localized resources as we need on a machine.

References

Check out this useful document, WPF Globalization and Localization Overview, from Microsoft.