Type conversions in C#

Converting one type to another

All of the primitive types, such as Int32, Boolean, String and so on, implement the IConvertible interface. This means we can easily convert one type to another using Convert.ChangeType, for example

float f = (float)Convert.ChangeType("100", typeof(float));

The thing to note about IConvertible is that it's one way, i.e. it converts from your type (which implements IConvertible) to another type, but not back again (this is where the TypeConverter class, which we'll discuss next, comes into play).

So let's look at a simple example which converts a Point to a string. And yes, before I show the code for implementing IConvertible: for this particular conversion we could have simply overridden the ToString method (which I shall also show in the sample code).

First off let's create a couple of tests to prove our code works. The first takes a Point and, using IConvertible, generates a string representation of the type. As the implementation delegates to ToString, it's no surprise that the second test, which calls ToString directly, produces the same output.

[Fact]
public void ChangeTypePointToString()
{
   Point p = new Point { X = 100, Y = 200 };
   string s = (string)Convert.ChangeType(p, typeof(string));

   Assert.Equal("(100,200)", s);
}

[Fact]
public void PointToString()
{
   Point p = new Point { X = 100, Y = 200 };

   Assert.Equal("(100,200)", p.ToString());
}

Now let’s look at our Point type, with an overridden ToString method

public struct Point : IConvertible
{
   public int X { get; set; }
   public int Y { get; set; }

   public override string ToString()
   {
      return String.Format("({0},{1})", X, Y);
   }

   // ... IConvertible methods
}

and now let's look at a possible implementation of IConvertible; since we only support conversion to a string, every other member simply throws an InvalidCastException

TypeCode IConvertible.GetTypeCode()
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

bool IConvertible.ToBoolean(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

byte IConvertible.ToByte(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

char IConvertible.ToChar(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

DateTime IConvertible.ToDateTime(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

decimal IConvertible.ToDecimal(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

double IConvertible.ToDouble(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

short IConvertible.ToInt16(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

int IConvertible.ToInt32(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

long IConvertible.ToInt64(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

sbyte IConvertible.ToSByte(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

float IConvertible.ToSingle(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

string IConvertible.ToString(IFormatProvider provider)
{
   return ToString();
}

object IConvertible.ToType(Type conversionType, IFormatProvider provider)
{
   if(conversionType == typeof(string))
      return ToString();

   throw new InvalidCastException("The method or operation is not implemented.");
}

ushort IConvertible.ToUInt16(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

uint IConvertible.ToUInt32(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

ulong IConvertible.ToUInt64(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

TypeConverters

As mentioned previously, IConvertible allows us to convert a type to one of the primitive types, but what if we want more complex capabilities, such as converting both to and from various types? This is where the TypeConverter class comes in.

Here we develop our type as normal and then we adorn it with the TypeConverterAttribute at the struct/class level. The attribute takes a type derived from the TypeConverter class. This TypeConverter derived class does the actual type conversion to and from our adorned type.

Let’s again create a Point struct to demonstrate this on

[TypeConverter(typeof(PointTypeConverter))]
public struct Point
{
   public int X { get; set; }
   public int Y { get; set; }
}

Note: We can also declare the TypeConverter type using a string in the standard "Type, Assembly" format, i.e. [TypeConverter("MyTypeConverters.PointTypeConverter, MyTypeConverters")], if we wanted to reference a type in an external assembly.

Before we create the TypeConverter code, let’s take a look at some tests which hopefully demonstrate how we use the TypeConverter and what we expect from our conversion code.

[Fact]
public void CanConvertPointToString()
{
   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Assert.True(tc.CanConvertTo(typeof(string)));
}

[Fact]
public void ConvertPointToString()
{
   Point p = new Point { X = 100, Y = 200 };

   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Assert.Equal("(100,200)", tc.ConvertTo(p, typeof(string)));
}

[Fact]
public void CanConvertStringToPoint()
{
   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Assert.True(tc.CanConvertFrom(typeof(string)));
}

[Fact]
public void ConvertStringToPoint()
{
   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Point p = (Point)tc.ConvertFrom("(100,200)");
   Assert.Equal(100, p.X);
   Assert.Equal(200, p.Y);
}

So as you can see, to get the TypeConverter for our class we call the static method GetConverter on the TypeDescriptor class. This returns an instance of our TypeConverter (in this case our PointTypeConverter). From this we can check whether the type converter can convert to or from a type, and then use the ConvertTo or ConvertFrom methods on the TypeConverter to convert the type.

The tests above show that we expect to be able to convert a Point to a string, where the string takes the format "(X,Y)". So let's look at an implementation for this.

Note: this is an example of how we might implement this code and does not have full error handling, but hopefully it gives a basic idea of what you might implement.

public class PointTypeConverter : TypeConverter
{
   public override bool CanConvertTo(ITypeDescriptorContext context, 
            Type destinationType)
   {
      return (destinationType == typeof(string)) || 
         base.CanConvertTo(context, destinationType);
   }

   public override object ConvertTo(ITypeDescriptorContext context, 
            CultureInfo culture, 
            object value, 
            Type destinationType)
   {
      if (destinationType == typeof(string))
      {
         Point pt = (Point)value;
         return String.Format("({0},{1})", pt.X, pt.Y);
      }
      return base.ConvertTo(context, culture, value, destinationType);
   }

   public override bool CanConvertFrom(ITypeDescriptorContext context, 
            Type sourceType)
   {
      return (sourceType == typeof(string)) ||
         base.CanConvertFrom(context, sourceType);
   }

   public override object ConvertFrom(ITypeDescriptorContext context, 
            CultureInfo culture, 
            object value)
   {
      string s = value as string;
      if (s != null)
      {
         s = s.Trim();

         if(s.StartsWith("(") && s.EndsWith(")"))
         {
            s = s.Substring(1, s.Length - 2);

            string[] parts = s.Split(',');
            if (parts != null && parts.Length == 2)
            {
               Point pt = new Point();
               pt.X = Convert.ToInt32(parts[0]);
               pt.Y = Convert.ToInt32(parts[1]);
               return pt;
            }
         }
      }
      return base.ConvertFrom(context, culture, value);
   }
}
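To see the converter in action end to end, here's a condensed, self-contained round trip (the Point and converter are trimmed versions of the code above, with error handling omitted for brevity; ConvertToInvariantString and ConvertFromInvariantString are the culture-invariant convenience wrappers TypeConverter provides):

```csharp
using System;
using System.ComponentModel;
using System.Globalization;

[TypeConverter(typeof(PointTypeConverter))]
public struct Point
{
   public int X { get; set; }
   public int Y { get; set; }
}

// A trimmed-down version of the converter shown above
public class PointTypeConverter : TypeConverter
{
   public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
   {
      return destinationType == typeof(string) || base.CanConvertTo(context, destinationType);
   }

   public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture,
      object value, Type destinationType)
   {
      if (destinationType == typeof(string))
      {
         Point pt = (Point)value;
         return String.Format("({0},{1})", pt.X, pt.Y);
      }
      return base.ConvertTo(context, culture, value, destinationType);
   }

   public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
   {
      return sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);
   }

   public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
   {
      string s = value as string;
      if (s != null)
      {
         // No error handling here - assumes the "(X,Y)" format
         string[] parts = s.Trim().TrimStart('(').TrimEnd(')').Split(',');
         return new Point { X = Convert.ToInt32(parts[0]), Y = Convert.ToInt32(parts[1]) };
      }
      return base.ConvertFrom(context, culture, value);
   }
}

public static class Demo
{
   public static void Main()
   {
      TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

      // Point -> string -> Point round trip
      string s = tc.ConvertToInvariantString(new Point { X = 100, Y = 200 });
      Point p = (Point)tc.ConvertFromInvariantString(s);

      Console.WriteLine(s);                  // (100,200)
      Console.WriteLine(p.X + "," + p.Y);    // 100,200
   }
}
```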

Getting started with T4 templates

What are T4 templates

First off, T4 stands for Text Template Transformation Toolkit. It allows us to embed C# (or VB.NET) code into documents and optionally combine the code with plain text, which can then be used to generate new documents – therefore similar to the way ASP or PHP engines work when producing HTML. The output text document might be a C# file, HTML file or pretty much any other file type you want.

A T4 template file ends in a .tt extension.

Within Visual Studio, when a T4 template is executed the resultant file is created as a child node (in Solution Explorer) of the template's node.

Let’s get started

If you create a solution (or have one ready), then on a selected project choose the "Add New Item" option. Then select "Text Template", give it a name (mine's named Test.tt) and add it to the project.

By default the file will look similar to the following (this is from the Visual Studio 2012/2013 T4 item template)

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>

Note: by default Visual Studio (well 2013 at least) does not offer syntax highlighting. You can use Tools | Extensions and Updates to add T4 Toolbox which is a free syntax highlighter etc. for T4.

So in the template code above we can see that <#@ #> are used to enclose “directives”.

The template adds a reference to the System.Core assembly and imports the namespaces System.Linq, System.Text and System.Collections.Generic. The template also sets the output extension (by default) to .txt; hence the template will generate a file named Test.txt (in my case), the name of the generated file matching the name of your template file.

The template directive is described in Microsoft's T4 documentation.

We can see that the language attribute denotes whether we want to write VB or C# code within our code statement blocks.

The hostspecific attribute denotes that we want the T4 template to allow us access to the this.Host property of type ITextTemplatingEngineHost. From this we can use methods such as the ResolvePath method, which allows us to get a combined path of our supplied filename or relative path, combined with the current path of the solution. See ITextTemplatingEngineHost Interface for more information.

Finally the debug attribute denotes that we want the intermediate code to include code which helps the debugger to identify more accurately where an exception or the likes has occurred.

Include Directive

We can also include T4 code from other files using the <#@ include file="mycode.tt" #> directive. See T4 Include Directive.

Blocks

So we’ve looked at the <#@ #> directive, but there’s more…

<# Statement Blocks #>

Statement blocks allow you to embed code into your template. This can be standalone blocks of code, or it can wrap loops or if statements around output text, as per

<#
for(int i = 0; i < 3; i++)
{
#>
   Hello
<#
}
#>

or, as mentioned, simply embedding code blocks, such as

<#
for(int i = 0; i < 3; i++)
{
   WriteLine("Hello");
}
#>

which results in the same output as the previous sample. See also Statement Syntax.

<#+ Class Feature Blocks #>

Class feature blocks allow you to create code which you can reuse within your template. Statement blocks are like the body of a method, but of course it would be useful to be able to create actual reusable methods. This is where class feature blocks come in.

For example

<#= WrapInComment("Hello") #>

<#+
private string WrapInComment(string text)
{
   return "<!-- " + text + " -->";
}
#>

It's important to note that the expression block which calls the method actually appears before the method declaration.

In this example we return a string, which can be used from a <#= Expression Block #> (see below). If the method returned void then it would equally be callable from a <# Statement Block #>.

<#= Expression Blocks #>

Expression blocks allow us to write out results from our code to the generated output text, for example

<#
int sum = 10 + 4;
#>

<#= sum #>

So the first block is a statement block where we declare the sum variable and it is output in situ via the <#= #> code. See also Expression Syntax (Domain-Specific Languages).

Great, now let’s do something with all this!

Let's write a T4 template which takes a C# class containing our model (as a partial class), with properties marked with XmlElementAttribute, and generates another partial class for the type with some new ShouldSerializeXXX methods; the idea being that any string property which is empty or null will not be serialized.

This example template is not perfect and I'm sure there are better ways to do things, but I really wanted to create a more meaningful example than a "Hello World", so bear with me.

<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ assembly name="System" #>
<#@ assembly name="System.Xml" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ import namespace="System.Xml.Serialization" #>
<#@ import namespace="System.Reflection" #>
<#@ import namespace="System.CodeDom.Compiler" #>
<#@ output extension=".cs" #>
<#
string model = File.ReadAllText(this.Host.ResolvePath("MyModel.cs"));

CodeDomProvider provider = CodeDomProvider.CreateProvider("CSharp");
CompilerParameters cp = new CompilerParameters();
cp.GenerateExecutable = false;
cp.GenerateInMemory = true;
cp.ReferencedAssemblies.Add("System.dll");
cp.ReferencedAssemblies.Add("System.Core.dll");
cp.ReferencedAssemblies.Add("System.Xml.dll");

CompilerResults cr = provider.CompileAssemblyFromSource(cp, new string[] { model });
if (cr.Errors.Count > 0)
{
   Error("Errors detected in the compiled model");
}
else
{
#>
using System;

<#
    bool nsAdded = false;

    Type[] types = cr.CompiledAssembly.GetTypes();
    foreach (Type type in types)
    {
        // time to generate code
        if (nsAdded == false)
        {
#>
namespace <#=type.Namespace#>
{
<#
        }
        nsAdded = true;
#>
    public partial class <#=type.Name#>
    {
<#
        foreach (PropertyInfo pi in type.GetProperties())
        {
            if (pi.PropertyType == typeof(string))
            {
                XmlElementAttribute[] a = (XmlElementAttribute[])pi.GetCustomAttributes(typeof(XmlElementAttribute), true);
                if (a != null && a.Length > 0)
                {
#>
        public bool ShouldSerialize<#=pi.Name#>()
        {
            return !String.IsNullOrEmpty(<#=pi.Name#>);
        }
<#
                }
            }
        }
#>
    }

<#
    }
}
#>
}

Before we look at the output from this T4 template, let's quickly review what's in it. Firstly we reference the two assemblies, System and System.Xml, then add our "using" clauses via the import directive. The template is marked as hostspecific so that we can use the Host property.

We read the MyModel.cs file in this instance, although we could instead have interacted with the Visual Studio environment to achieve this. Then we create a C# CodeDomProvider (we could use Roslyn for this instead). The purpose of the CodeDomProvider is solely to compile MyModel.cs so that we can use reflection to find the properties with the XmlElementAttribute, as I didn't really want to parse the source code myself (which is where Roslyn would have come in).

Now you can see interspersed with our T4 blocks of code is the output text which creates the namespace, class and ShouldSerializeXXX methods.

So the code gets the properties of our MyModel object, finds those with a string type and with the XmlElementAttribute applied, and then creates the namespace, class and methods to match these properties, writing output which looks something like the following

using System;

namespace DomainObjects
{
    public partial class MyModel
    {
        public bool ShouldSerializeName()
        {					
            return !String.IsNullOrEmpty(Name);
        }
        public bool ShouldSerializeAge()
        {					
            return !String.IsNullOrEmpty(Age);
        }
        public bool ShouldSerializeAddress()
        {					
            return !String.IsNullOrEmpty(Address);
        }
   }
}

And finally (for now)…

Right mouse click on the T4 .tt file and you will see a "Debug T4 Template" option. I actually only discovered this after searching for something else related to this post and came across T4 Template Debugging, which talks about it.

You can now put break points against your T4 code and run the T4 debugger!

How to, conditionally, stop XML serializing properties

Let’s assume we have this simple C# class which represents some XML data (i.e. it’s serialized to XML eventually)

[XmlType(AnonymousType = true)]
public partial class Employee
{
   [XmlAttribute(AttributeName = "id")]
   public string Id { get; set; }

   [XmlAttribute(AttributeName = "name")]
   public string Name { get; set; }

   [XmlAttribute(AttributeName = "age")]
   public int Age { get; set; }
}

Under certain circumstances we may prefer not to include elements or attributes in the XML if the values are not suitable.

We could handle this in a simplistic manner by setting a DefaultValueAttribute on a property, and obviously the data will not be serialized unless the value differs from the default. But this is not so useful if we want more complex logic to decide whether a value should be serialized: for example, what if we don't want to serialize Age if it's less than 1 or greater than 100, or not serialize Name if it's empty, null or fewer than 3 characters long, and so on?

ShouldSerializeXXX

Note: You should not use a ShouldSerializeXXX method and the DefaultValueAttribute on the same property

So, we can achieve this more complex logic using a ShouldSerializeXXX method. If we create a partial class (shown below) and add a ShouldSerializeName method, we can tell the serializer not to bother serializing the Name property under these more complex circumstances

public partial class Employee
{
   public bool ShouldSerializeName()
   {
      return !String.IsNullOrEmpty(Name) && Name.Length >= 3;
   }
}

When serializing, these methods are called by the serializer to determine whether each property should be serialized; if not, the element/attribute is simply not added to the XML.
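To see the serializer actually honouring these methods, here's a minimal, self-contained sketch (a simplified Employee with the same attribute names as the class above, plus an illustrative ShouldSerializeAge):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Employee
{
   [XmlAttribute(AttributeName = "name")]
   public string Name { get; set; }

   [XmlAttribute(AttributeName = "age")]
   public int Age { get; set; }

   // Only serialize Name when it's non-empty and at least 3 characters
   public bool ShouldSerializeName()
   {
      return !String.IsNullOrEmpty(Name) && Name.Length >= 3;
   }

   // Only serialize Age when it's within a sensible range
   public bool ShouldSerializeAge()
   {
      return Age >= 1 && Age <= 100;
   }
}

public static class Demo
{
   public static void Main()
   {
      var serializer = new XmlSerializer(typeof(Employee));

      using (var writer = new StringWriter())
      {
         // "Jo" is too short and 0 is out of range, so neither attribute appears
         serializer.Serialize(writer, new Employee { Name = "Jo", Age = 0 });
         Console.WriteLine(writer.ToString());
      }
   }
}
```

Running this produces an Employee element with no name or age attributes at all.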

MongoDB Replication & Recovery

In a non-trivial system, we'd normally look to have three types of database set-up: a primary set up as a writeable database, one or more secondary databases set up as read-only databases, and finally an arbiter, used to help decide which secondary database takes over should the primary database go down.

Note: An arbiter is added to prevent tied votes when electing a secondary to take over as primary, and thus should only be used where an even number of mongodb instances exists in a replication set.

The secondary databases will be “eventually consistent” in that when data is written to the primary database it is not immediately replicated to the secondary databases, but will “eventually” be replicated.

Let’s look at an example replication set…

To set up a replication set, we would start with a minimum of three instances of, or machines running, mongodb. As previously mentioned, this replication set will consist of a primary database, a secondary database and an arbiter.

Let's run three instances on a single machine to begin with, so we need to create three database folders, for example

mkdir MyData\database1
mkdir MyData\database2
mkdir MyData\database3

Obviously, if all three are running on the same machine, we need to give the mongodb instances their own ports. For example, run the following commands, each in its own command prompt

mongod --dbpath MyData\database1 --port 30000 --replSet "sample"
mongod --dbpath MyData\database2 --port 40000 --replSet "sample"
mongod --dbpath MyData\database3 --port 50000 --replSet "sample"

"sample" denotes an arbitrary, user-defined name for our replication set. However, the replication set still hasn't been created at this point; we first need to run the shell against one of the servers, for example

Note: the sample above, showing all databases on the same machine, is solely an example. Obviously no production system should implement this strategy; each of the primary, secondary and arbiter instances should run on its own machine.

mongo --port 30000

Now we need to create the configuration for our replication set, for example

var sampleConfiguration =
{ _id : "sample", 
   members : [
     {_id : 0, host : 'localhost:30000', priority : 10 },
     {_id : 1, host : 'localhost:40000'},
     {_id : 2, host : 'localhost:50000', arbiterOnly : true } 
   ]
}

This sets up the replication set, stating that the host on port 30000 is the primary (due to its priority being set, in this example). The host on port 40000 doesn't have a priority (or arbiterOnly) set, so this is the secondary, and finally we have the arbiter.

At this point we've created the configuration, but we still need to actually initiate it. So, again from the shell, we write

rs.initiate(sampleConfiguration)

Note: This may take a few minutes as it configures all the instances which make up the replication set. Eventually the shell will return from the initiate call and should report "ok".

The shell prompt should now change to show the replication set name of the currently connected server (i.e. PRIMARY).

Now if we write data to the primary it will “eventually” be replicated to all secondary databases.
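By default the mongo shell refuses reads against a secondary; a quick way to watch replication happen, assuming the set-up above, is to connect to the secondary and allow reads on it (test and things below are just placeholder database/collection names):

```
mongo --port 40000
sample:SECONDARY> rs.slaveOk()
sample:SECONDARY> use test
sample:SECONDARY> db.things.find()
```

Anything written to the primary should, after a short delay, appear in this query's results.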

If we take the primary database offline (or, worse still, a fault occurs and it's taken offline without our involvement), a secondary database will be promoted to primary (obviously in our example we only have one secondary, so this will take over as the primary). If/when the original primary comes back online it will, thanks to its higher priority, again become the primary database, and the secondary will of course return to being a secondary database.

Don’t forget you can use

rs.help()

to view help for the various replication commands.

Entity Framework – lazy & eager loading

By default Entity Framework will lazy load any related entities. If you've not come across lazy loading before, it's basically coding something in such a way that the item is not retrieved and/or created until you actually want to use it. For example, the code below shows that the AlternateNames list is not instantiated until the property is called.

public class Plant
{
   private IList<AlternateName> alternateNames;

   public virtual IList<AlternateName> AlternateNames
   {
      get
      {
         return alternateNames ?? (alternateNames = new List<AlternateName>());
      }
   }
}

So as you can see from the example above we only create an instance of IList when the AlternateNames property is called.

As stated at the start of this post, by default Entity Framework defaults to lazy loading which is perfect in most scenarios, but let’s take one where it’s not…

If you are returning an instance of an object (like Plant above), AlternateNames is not loaded until it's referenced. However, if you pass the Plant object over the wire using something like WCF, AlternateNames will not have been instantiated; the caller/client will try to access the AlternateNames property and of course it can no longer be loaded. What we need to do is ensure the object is fully loaded before passing it over the wire. To do this we need to eager load the data.

Eager loading is the process of ensuring a lazily loaded object is fully loaded. In Entity Framework we achieve this using the Include method, thus

return context.Plants.Include("AlternateNames");
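As an aside, if you're on Entity Framework 4.1 or later (the DbContext API), there's a strongly typed overload of Include in the System.Data.Entity namespace which avoids the magic string; a sketch, assuming the same Plant context as above:

```csharp
// using System.Data.Entity;  (brings the lambda-based Include extension into scope)
return context.Plants.Include(p => p.AlternateNames);
```

The advantage is that renaming AlternateNames becomes a compile-time error rather than a runtime one.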

Comparing Moq and JustMock lite

This is not meant as a "which is best" post, or even a blow-by-blow feature comparison, but more a "I'm using JustMock Lite (known henceforth as JML), how do I do this in Moq" or vice versa.

Please note, Moq version 4.20 introduced SponsorLink, which appears to send data to a third party. See the discussions on GitHub.

For this post I'm using Moq 4.2.1402.2112 and JML 2014.1.1424.1

For the code samples, I'm writing xUnit tests, but I'm not necessarily going to write code which uses the mocks; instead I will call directly on the mocks to demonstrate solely how they work. Such tests would obviously only really ensure the mocking framework worked as expected, but hopefully the idea of the mocks' usage is conveyed in as little code as possible.

Strict behaviour

By default both Moq and JML use loose behavior, meaning simply that if we do not create any Setup or Arrange code for the methods/properties being mocked, then the mocking framework will supply defaults for them. When using strict behavior we are basically saying that if a method or property is called on the mock object and we've not set up any behavior for it, then the mocking framework should fail, meaning we'll get an exception from the mocking framework.

Following is an example of using strict behavior: removing the Setup/Arrange will cause a mocking framework exception; adding the Setup/Arrange will satisfy the strict behavior and allow the code to complete

Using Moq

Mock<IFeed> feed = new Mock<IFeed>(MockBehavior.Strict);

feed.Setup(f => f.GetTitle()).Returns("");

feed.Object.GetTitle();

Using JML

IFeed feed = Mock.Create<IFeed>(Behavior.Strict);

Mock.Arrange(() => feed.GetTitle()).Returns("");

feed.GetTitle();

Removing the MockBehavior.Strict/Behavior.Strict from the mock calls will switch to loose behaviors.

Ensuring a mocked method/property is called n times

Occasionally we want to ensure that a method/property is called a specified number of times only, for example, once, at least n times, at most n etc.

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();

feed.Setup(f => f.GetTitle()).Returns("");

feed.Object.GetTitle();

feed.Verify(f => f.GetTitle(), Times.Once);

Using JML

IFeed feed = Mock.Create<IFeed>();

Mock.Arrange(() => feed.GetTitle()).Returns("").OccursOnce();

feed.GetTitle();

Mock.Assert(feed);

In both examples we could change OccursOnce()/Times.Once to OccursNever()/Times.Never or Occurs(2)/Times.Exactly(2) and so on.
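For instance, verifying exactly two calls looks like this in Moq (a sketch following the same pattern as the tests above; it requires the Moq package):

```csharp
Mock<IFeed> feed = new Mock<IFeed>();

feed.Setup(f => f.GetTitle()).Returns("");

feed.Object.GetTitle();
feed.Object.GetTitle();

// Fails if GetTitle was called any number of times other than two
feed.Verify(f => f.GetTitle(), Times.Exactly(2));
```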

Throwing exceptions

On occasion we may want to mock an exception. Maybe our IFeed throws a WebException if it cannot download data from a website; to simulate this on our mock object we can use the following

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();

feed.Setup(f => f.Download()).Throws<WebException>();

Assert.Throws<WebException>(() => feed.Object.Download());

feed.Verify();

Using JML

IFeed feed = Mock.Create<IFeed>();

Mock.Arrange(() => feed.Download()).Throws<WebException>();

Assert.Throws<WebException>(() => feed.Download());

Mock.Assert(feed);

Supporting multiple interfaces

Occasionally we might be mocking an interface, such as IFeed, but our application will check whether the IFeed object also supports IDataErrorInfo (for example) and handle the code accordingly. So, without actually changing IFeed, what we want is a mock which implements both interfaces.

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();
feed.As<IDataErrorInfo>();

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed.Object);

Using JML

IFeed feed = Mock.Create<IFeed>(r => r.Implements<IDataErrorInfo>());

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed);

As you can see, we add interfaces to our mock in Moq using the As method, and in JML using the Implements method; we can chain these methods together to add further interfaces to our mock, as per

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();
feed.As<IDataErrorInfo>().
     As<INotifyPropertyChanged>();

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed.Object);
Assert.IsAssignableFrom(typeof(INotifyPropertyChanged), feed.Object);

Using JML

IFeed feed = Mock.Create<IFeed>(r => 
   r.Implements<IDataErrorInfo>().
     Implements<INotifyPropertyChanged>());

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed);
Assert.IsAssignableFrom(typeof(INotifyPropertyChanged), feed);

Automocking

One of the biggest problems when unit testing using mocks is when a system under test (SUT) requires many parts to be mocked and set up, or when the code for the SUT changes often, requiring refactoring of tests simply to add or change the mock objects used.

As you've already seen with loose behavior, we can get around the need to set up every single bit of code and thus concentrate our tests on specific areas without creating a thousand and one mocks and setup/arrange sections of code. But with a possibly ever-changing SUT it would be good if we didn't need to continually add/remove mocks which we might not be testing against.

What would be nice is if the mocking framework could work like an IoC system and automatically inject the mocks for us – this is basically what auto mocking is about.

So if we look at the code below, assume for a moment that initially the code didn't include IProxySettings: we write our IFeedList mock and the code to test RssReader, then we add the new IProxySettings interface and now we need to alter the tests to include it even though our current test code doesn't use it. Of course, with the addition of a single interface this may seem a little over the top; however, it can easily get a lot worse.

So here’s the code…

System under test and service code

public interface IFeedList
{
   string Download();
}

public interface IProxySettings
{		
}

public class RssReader
{
   private IFeedList feeds;
   private IProxySettings settings;

   public RssReader(IProxySettings settings, IFeedList feeds)
   {
      this.settings = settings;
      this.feeds = feeds;
   }

   public string Download()
   {
      return feeds.Download();
   }
}

Now when the auto mocking container mocks the RssReader, it will automatically inject mocks for the two interfaces, then it’s up to our test code to setup or arrange expectations etc. on it.

Using Moq

Unlike JML (whose NuGet package adds the Telerik.JustMock.Container by default), Moq doesn't come with an auto mocking container out of the box, as you'll see in the code further below. Instead there are several auto mocking containers created for use with Moq by the community at large; I'm going to concentrate on Moq.Contrib, which includes the AutoMockContainer class.

MockRepository repos = new MockRepository(MockBehavior.Loose);
AutoMockContainer container = new AutoMockContainer(repos);

RssReader rss = container.Create<RssReader>();

container.GetMock<IFeedList>().Setup(f => f.Download()).Returns("Data");

Assert.Equal("Data", rss.Download());

repos.VerifyAll();

Using JML

var container = new MockingContainer<RssReader>();

container.Arrange<IFeedList>(f => f.Download()).Returns("Data");

Assert.Equal("Data", container.Instance.Download());

container.AssertAll();

In both cases the auto mock container created our RssReader, mocking the interfaces passed to it.

That’s it for now, I’ll add further comparisons as and when I get time.

Getting started with Linq Expressions

The Expression class is used to represent expression trees and is seen in use within LINQ. If you’ve been creating your own LINQ provider you’ll also have come across Expressions. For example see my post Creating a custom Linq Provider on this subject.

Getting started with the Expression class

Expression objects can be used in various situations…

Let’s start by looking at using Expressions to represent lambda expressions.

Expression<Func<bool>> e = () => a < b;

In the above we declare an Expression wrapping a Func which takes no arguments and returns a Boolean. On the right hand side of the assignment operator we have an equivalent lambda expression, i.e. one which takes no arguments and returns a Boolean (assuming a and b are numeric variables already in scope).

From this Expression we can then get at the function it represents by calling the Compile method, thus

Func<bool> f = e.Compile();

We could also create the same lambda expression using the Expression class's factory methods. For example

ConstantExpression lParam = Expression.Constant(a, typeof(int));
ConstantExpression rParam = Expression.Constant(b, typeof(int));
BinaryExpression lessThan = Expression.LessThan(lParam, rParam);
Expression<Func<bool>> e = Expression.Lambda<Func<bool>>(lessThan);

This probably doesn't seem very exciting in itself, but if we can create an Expression from a lambda then we can also deconstruct a lambda into an Expression tree. So in the previous lambda example we could look at the left and right sides of the a < b expression and find their types, evaluate the parts, or simply traverse the expression and build a database query from it, but that's a subject beyond this post.
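As a quick sketch of that deconstruction idea (my own example, assuming a and b are ints in scope), we can cast the lambda's Body to a BinaryExpression and inspect both sides:

```csharp
using System;
using System.Linq.Expressions;

class Program
{
   static void Main()
   {
      int a = 100, b = 200;
      Expression<Func<bool>> e = () => a < b;

      // the body of the lambda is a BinaryExpression whose NodeType is LessThan
      var body = (BinaryExpression)e.Body;
      Console.WriteLine(body.NodeType);   // LessThan
      Console.WriteLine(body.Left.Type);  // System.Int32
      Console.WriteLine(body.Right.Type); // System.Int32

      // and we can still compile and evaluate the whole expression
      Console.WriteLine(e.Compile()());   // True
   }
}
```

Note that because a and b are captured local variables, Left and Right are actually MemberExpressions over the compiler-generated closure, not ConstantExpressions, but their Type is still System.Int32.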

An alternate use

An interesting use of Expressions can be found in many MVVM base classes (or the likes). I therefore take absolutely no credit for the idea.

The scenario is this. We want to create a base class for handling the INotifyPropertyChanged interface, it will look like this

public class PropertyChangedObject : INotifyPropertyChanged
{
   public event PropertyChangedEventHandler PropertyChanged;

   public void OnPropertyChanged(string propertyName)
   {
      if (PropertyChanged != null)
      {
         PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
      }
   }
}

Next let’s write a simple class to use this, such as

public class MyObject : PropertyChangedObject
{
   private string name;

   public string Name
   {
      get { return name; }
      set
      {
         if (name != value)
         {
            name = value;
            OnPropertyChanged("Name");
         }
      }
   }
}

As you can see, within the setter, we need to check whether the value stored in the Name property is different to the new value passed to it and if so, update the backing field and then raise a property changed event passing a string to represent the property name.

An obvious problem with this approach is that "magic strings" can sometimes be incorrect (e.g. spelling mistakes). So it would be nicer if we could somehow pass the property name in a more typesafe, compile-time checked way. It would also be nice to wrap the whole if block in an extension method which we can reuse in all the setters on our objects.

Note: before we go much further with this, in .NET 4.5 there’s a better way to implement this code. See my post on the CallerMemberNameAttribute attribute.
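For comparison, here's a rough sketch of that .NET 4.5 approach; the CallerMemberNameAttribute makes the compiler supply the calling member's name, so the setter needs no string at all:

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

public class PropertyChangedObject : INotifyPropertyChanged
{
   public event PropertyChangedEventHandler PropertyChanged;

   // the compiler substitutes the caller's member name for propertyName,
   // so a property setter can simply call OnPropertyChanged()
   public void OnPropertyChanged([CallerMemberName] string propertyName = null)
   {
      if (PropertyChanged != null)
      {
         PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
      }
   }
}

public class MyObject : PropertyChangedObject
{
   private string name;

   public string Name
   {
      get { return name; }
      set
      {
         if (name != value)
         {
            name = value;
            OnPropertyChanged(); // "Name" supplied by the compiler
         }
      }
   }
}
```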

So one way we could pass the property name, which at least ensures the property exists at compile time, is to use an Expression object which will then include all the information we need (and more).

Here’s what we want the setter code to look like

public string Name
{
   get { return name; }
   set { this.RaiseIfPropertyChanged(p => p.Name, ref name, value); }
}

The second and third arguments are self-explanatory, but for the sake of completeness let's review them. The second argument takes a reference to the backing field; this will be set to the value contained within the third argument only if the two differ. At that point we expect an OnPropertyChanged call to be made and the PropertyChanged event to be raised.

The first argument is the bit relevant to the topic of this post, i.e. the Expression class.

Let’s look at the extension method that implements this and then walk through it

public static void RaiseIfPropertyChanged<TModel, TValue>(this TModel po, 
         Expression<Func<TModel, TValue>> e,  
         ref TValue backingField, 
         TValue value) where 
            TModel : PropertyChangedObject
{
   if (!EqualityComparer<TValue>.Default.Equals(backingField, value))
   {
      var m = e.Body as MemberExpression;
      if(m != null)
      {
         backingField = value;
         po.OnPropertyChanged(m.Member.Name);
      }
   }
}

The method can be used on any type which inherits from PropertyChangedObject; this is so we have access to the OnPropertyChanged method.

We check the equality of backingField and value and, obviously, only if they're different do we bother doing anything. Assuming the values are different, we then get the Body of the expression as a MemberExpression; its Member.Name property will be a string representing the name of the property supplied in the calling property, i.e. in this example "Name".

So now when we use the RaiseIfPropertyChanged extension method we have a little more type safety, i.e. the property passed to the expression must be the same type as the backing field and value, and of course a misspelled/non-existent property will fail to compile as well, which lessens the chances of "magic string" typos. Obviously if we passed another property of the same type into the Expression then this would compile and seemingly work, but OnPropertyChanged would be passed an incorrect property string; this is where the CallerMemberNameAttribute would help us further.

Step by step mocking with RhinoMock

Requirements

We’re going to work through a simple set of scenarios/tests using RhinoMocks and NUnit. So we’ll need the following to get started

I’m using the RhinoMocks NuGet package, version 3.6.1 for these samples and the NuGet package for NUnit 2.6.3.

What are we going to mock

For the sake of these examples we’re going to have an interface, which we will mock out, named ISession. Here’s the code for ISession

public interface ISession
{
   /// <summary>
   /// Subscribe to session events
   /// </summary>
   /// <param name="o">the object that recieves event notification</param>
   /// <returns>a token representing the subscription object</returns>
   object Subscribe(object o);
   /// <summary>
   /// Unsubscribe from session events
   /// </summary>
   /// <param name="token">a token supplied via the Subscribe method</param>
   void Unsubscribe(object token);
   /// <summary>
   /// Executes a command against the session object
   /// </summary>
   /// <param name="command">the command to be executed</param>
   /// <returns>an object representing any return value for the given command</returns>
   object Execute(string command);
}

Getting started

Before we can create any mock objects we need to create an instance of the factory class or more specifically the MockRepository. A common pattern within NUnit test fixtures is to create this during the SetUp, for example

[TestFixture]
public class DemoTests
{
   private MockRepository repository;

   [SetUp]
   public void SetUp()
   {
      repository = new MockRepository();
   }
}

The MockRepository is used to create mock objects but also can be used to record, replay and verify mock objects (and more). We’ll take a look at some of these methods as we go through this step by step guide.

Often, as is the case with tests I've written with RhinoMocks, we want to verify all expectations on our mocks. Using NUnit we can handle this in the TearDown, for example by adding the following to the DemoTests class

[TearDown]
public void TearDown()
{
   repository.VerifyAll();
}

this will verify that all mock expectations have been met when the test fixture is torn down.

Mocks vs DynamicMocks

In previous posts on Moq and JustMock Lite I've mentioned strict and loose behavior. Basically, loose behavior on a mock object means I do not need to supply expectations for every method call, property invocation etc. on a mocked object, whereas strict means the opposite: if we do not supply the expectation, an ExpectationViolationException will be thrown by RhinoMocks.

In RhinoMock terminology, mocks have strict semantics whereas dynamic mocks have loose.

So, to see this in action let’s add two tests to our test fixture

[Test]
public void TestMock()
{
   ISession session = repository.CreateMock<ISession>();
   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
}

[Test]
public void TestDynamic()
{
   ISession session = repository.DynamicMock<ISession>();
   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
}

Our Mapper class looks like the following

public class Mapper
{
   private readonly ISession session;
   private object token;

   public Mapper(ISession session)
   {
      this.session = session;

      token = session.Subscribe(this);
   }
}

don’t worry about repository.ReplayAll(), we’ll get to that in a minute or two.

Now if we run these two tests, TestDynamic will succeed whereas TestMock will fail with an ExpectationViolationException. The dynamic mock worked because of its loose semantics, which means it does not require all expectations to be set before usage. We can fix TestMock by writing an expectation for the call to the Subscribe method on the ISession interface.

So changing the test to look like the following

[Test]
public void TestMock()
{
   ISession session = repository.CreateMock<ISession>();
   Expect.Call(session.Subscribe(null)).IgnoreArguments().Return(null).Repeat.Any();

   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
}

So in the above code we arrange our expectations. Basically we're saying: expect a call on the Subscribe method of the session mock object. In this case we pass in null and tell the mock to ignore the arguments; removing IgnoreArguments means we expect Mapper to call the Subscribe method passing the exact arguments supplied in the expectation, i.e. in this case null.

Next we're setting the expectation to return null, and as we don't care how many times this method is called we call Repeat.Any(). If we wish to ensure the method is called just the once, we can change this to Repeat.Once(), which is obviously more specific and useful for catching scenarios where a method is accidentally called more times than expected. In our Mapper's case the method can only be called once, so we'd normally set this to Repeat.Once().

What we've done is supply the defaults that the dynamic mock object would probably have implemented for our expectations anyway, hence why I used Repeat.Any() to begin with. The implementation above will now cause the test to succeed.

Record/Playback

Now to return to repository.ReplayAll(). RhinoMocks works in a record/playback way; by default it's in record mode, so if in TestDynamic we comment out repository.ReplayAll() we'll get an InvalidOperationException ("The mock object is in a record state."). We arrange our expectations in the record phase, then act upon them during playback. As we are, by default, in record mode we can simply start creating our expectations, then when we're ready to act on those mocked objects we switch the MockRepository to playback mode using repository.ReplayAll().

Arrange

As already mentioned, we need to set up expectations on our mock object (unless we're using dynamic mocks, of course). We do this during the arrange phase, as was shown with the line

Expect.Call(session.Subscribe(null)).IgnoreArguments().Return(null).Repeat.Any();

One gotcha is if your method takes no arguments and returns void. So let’s assume ISession now has a method DoSomething which takes no arguments and returns void and see what happens…

Trying to write the following

Expect.Call(session.DoSomething()).Repeat.Any();

will fail to compile as we cannot convert from void to Rhino.Mocks.Expect.Action; we can easily fix this by removing the () and using the following line

Expect.Call(session.DoSomething).Repeat.Any();

Equally if the ISession had a property named Result which was of type string we can declare the expectation as follows

Expect.Call(session.Result).Return("hello").Repeat.Any();

We can also setup an expectation on a method call using the following

session.Subscribe(null);
LastCall.IgnoreArguments().Return(null).Repeat.Any();

in this case the LastCall allows us to set our expectations on the previous method call, i.e. this is equivalent to our previous declaration for the expectation on the Subscribe method. This syntax is often used when dealing with event handlers.

Mocking Event Handlers

Let’s assume we have the following on the ISession

event EventHandler StatusChanged;

the idea being that a session object may change, maybe to a disconnected state, and we want the Mapper to respond to this in some way. We then want to raise events and see whether the Mapper changes accordingly.

Okay, so let’s rewrite the Mapper constructor to look like the following

public Mapper(ISession session)
{
   Status = "Connected";
   this.session = session;

   session.StatusChanged += (sender, e) =>
   {
      Status = "Disconnected";
   };
}

The assumption is that we have a string property Status and that if a status change event is received the status should switch from Connected to Disconnected.

Firstly we need to handle the expectation of the += being called on the event in ISession, so our test would look like this

[Test]
public void TestMock()
{
   ISession session = repository.CreateMock<ISession>();

   session.StatusChanged += null;
   LastCall.IgnoreArguments();

   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
   Assert.AreEqual("Connected", mapper.Status);
}

Notice we use LastCall to create an expectation on the += being called on the StatusChanged event. This should run without any errors.

Now we want to check whether the Mapper Status changes when a StatusChanged event takes place, so we need a way to raise the StatusChanged event. RhinoMocks includes the IEventRaiser interface for this; rewriting our test as follows will solve this requirement

[Test]
public void TestMock()
{
   ISession session = repository.CreateMock<ISession>();

   session.StatusChanged += null;
   LastCall.IgnoreArguments();

   IEventRaiser raiser = LastCall.GetEventRaiser();

   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
   Assert.AreEqual("Connected", mapper.Status);

   raiser.Raise(null, null);

   Assert.AreEqual("Disconnected", mapper.Status);
}

Notice we use LastCall.GetEventRaiser() to get an IEventRaiser, which will allow us to raise events on the StatusChanged event. We could simply combine the LastCall calls to form

IEventRaiser raiser = LastCall.IgnoreArguments().GetEventRaiser();

The call raiser.Raise(null, null) is used to actually raise the event from our test, the two arguments match the arguments on an EventHandler, i.e. an object (for the sender) and EventArgs.

More types of mocks

Along with CreateMock and DynamicMock you may notice some other mock creation methods.

What are the *MultiMocks?

CreateMultiMock and DynamicMultiMock allow us to create a mock (strict semantics for CreateMultiMock and loose for DynamicMultiMock) supporting multiple types. In other words, let's assume our implementation of ISession is expected to support another interface, IStatusUpdate, and this will have the event we previously declared, i.e.

public interface IStatusUpdate
{
   event EventHandler StatusChanged;
}

Now we change the Mapper constructor to check whether the ISession also supports IStatusUpdate and only then subscribe to its event, for example

public Mapper(ISession session)
{
   Status = "Connected";
   this.session = session;

   IStatusUpdate status = session as IStatusUpdate;
   if (status != null)
   {
      status.StatusChanged += (sender, e) =>
      {
         Status = "Disconnected";
      };
   }
}

and finally let’s change the test to look like

[Test]
public void TestMock()
{
   ISession session = repository.CreateMultiMock<ISession>(typeof(IStatusUpdate));

   ((IStatusUpdate)session).StatusChanged += null;

   IEventRaiser raiser = LastCall.IgnoreArguments().GetEventRaiser();

   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
   Assert.AreEqual("Connected", mapper.Status);

   raiser.Raise(null, null);

   Assert.AreEqual("Disconnected", mapper.Status);
}

As you can see, we've now created a mock ISession object which also supports IStatusUpdate.

PartialMock

The partial mock allows us to mock part of a class. For example, let’s do away with our Mapper and just write a test to check what’s returned from this new Session class

public class Session
{
   public virtual string Connect()
   {
      return "none";
   }
}

and our test looks like this

[Test]
public void TestMock()
{
   Session session = repository.PartialMock<Session>();

   repository.ReplayAll();

   Assert.AreEqual("none", session.Connect());
}

This will run and succeed when we use the PartialMock as it automatically uses the Session object's Connect method, but we can override this by using the following

[Test]
public void TestMock()
{
   Session session = repository.PartialMock<Session>();

   Expect.Call(session.Connect()).Return("hello").Repeat.Once();

   repository.ReplayAll();

   Assert.AreEqual("hello", session.Connect());
}

If instead we use CreateMock in the above, this will still work, but if we remove the Expect.Call the mock does not fall back to using the Session Connect method; instead it fails with an ExpectationViolationException.

So if you need to mock a concrete object but have the code use the concrete class methods in places, you can use the PartialMock.

Note: The methods on the Session class need to be marked as virtual for the above to work.

Obviously a PartialMultiMock can be used to implement more than one type.

Stubs

A stub is generally seen as an implementation of a class with minimal functionality, i.e. if we were to implement any of our ISession interfaces (shown in this post), properties would simply set and get from a backing store, and methods would return defaults and do nothing. Methods could return values, but it's all about minimal implementations and consistency. Unlike with mocks we're not trying to test behavior, so we're not interested in whether a method was called once or a hundred times.

Often a mock with loose semantics will suffice, but RhinoMocks includes a specific stub type that's created via

repository.Stub<ISession>();

the big difference between this and a dynamic mock is that, in essence, properties are all declared as

Expect.Call(session.Name).PropertyBehavior();

implicitly (PropertyBehavior is discussed in the next section). This means if we run a test using a dynamic mock, such as

ISession session = repository.DynamicMock<ISession>();
repository.ReplayAll();

session.Name = "Hello";

Assert.AreEqual(null, session.Name);

The property session.Name will be null even though we assigned it "Hello". Using a stub, RhinoMocks gives us an implementation of the property setter/getter, and thus the following results in a passing test

ISession session = repository.Stub<ISession>();
repository.ReplayAll();

session.Name = "Hello";

Assert.AreEqual("Hello", session.Name);

i.e. session.Name now has the value “Hello”.

Mocking properties

So, we’ve got the following interface

public interface ISession
{
   string Name { get; set; }
}

now what if we want to handle the getter and setter as if they were just a simple property (i.e. implemented exactly as shown in the interface)? Instead of creating return values etc. we can use a shortcut

Expect.Call(session.Name).PropertyBehavior();

which basically creates an implementation of the property which we can then set and get without supplying full expectations, i.e. the following test shows us changing the Name property after the replay

[Test]
public void TestMock()
{
   ISession session = repository.CreateMock<ISession>();

   Expect.Call(session.Name).PropertyBehavior();

   repository.ReplayAll();

   session.Name = "Hello";
   Assert.AreEqual("Hello", session.Name);
}

Generating classes from XML using xsd.exe

The XML Schema Definition Tool (xsd.exe) can be used to generate XML schema files from XML and, better still, C# classes from XML schema files.

Creating classes based upon an XML schema file

So in its simplest usage we can simply type

xsd person.xsd /classes

and this generates C# classes representing the XML schema. The default output is C#, but using the /language (or the shorter /l) switch we can generate Visual Basic using the VB value, JScript using JS, or explicitly state the language as C# using CS. So, for example, using the previous command line but now generating VB code we can write

xsd person.xsd /classes /l:VB

Assuming we have an xml schema, person.xsd, which looks like this

<?xml version="1.0" encoding="utf-8"?>
<xs:schema elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Person" nillable="true" type="Person" />
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element minOccurs="0" maxOccurs="1" name="FirstName" type="xs:string" />
      <xs:element minOccurs="0" maxOccurs="1" name="LastName" type="xs:string" />
      <xs:element minOccurs="1" maxOccurs="1" name="Age" type="xs:int" />
    </xs:sequence>
  </xs:complexType>
</xs:schema>

The class created (in C#) looks like the following (comments removed)

[System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "4.0.30319.17929")]
[System.SerializableAttribute()]
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.ComponentModel.DesignerCategoryAttribute("code")]
[System.Xml.Serialization.XmlRootAttribute(Namespace="", IsNullable=true)]
public partial class Person {
    
    private string firstNameField;
    
    private string lastNameField;
    
    private int ageField;
    
    public string FirstName {
        get {
            return this.firstNameField;
        }
        set {
            this.firstNameField = value;
        }
    }
    
    public string LastName {
        get {
            return this.lastNameField;
        }
        set {
            this.lastNameField = value;
        }
    }
    
    public int Age {
        get {
            return this.ageField;
        }
        set {
            this.ageField = value;
        }
    }
}

Creating an XML schema based on an XML file

It might be that we’ve got an XML file but no xml schema, so we’ll need to convert that to an xml schema before we can generate our classes file. Again we can use xsd.exe

xsd person.xml

the above will create an XML schema based upon the XML file. Obviously this is limited to what is available in the XML file itself, so if your XML doesn't have "optional" elements/attributes, xsd.exe cannot include those in the schema it produces.

Assuming we therefore started with an XML file, the person.xml, which looks like the following

<?xml version="1.0" encoding="utf-8"?>

<Person>
   <FirstName>Spongebob</FirstName>
   <LastName>Squarepants</LastName>
   <Age>21</Age>
</Person>

Note: I’ve no idea if that is really SpongeBob’s age.

Running xsd.exe against person.xml file we get the following xsd schema

<?xml version="1.0" encoding="utf-8"?>
<xs:schema id="NewDataSet" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
  <xs:element name="Person">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="FirstName" type="xs:string" minOccurs="0" />
        <xs:element name="LastName" type="xs:string" minOccurs="0" />
        <xs:element name="Age" type="xs:string" minOccurs="0" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
    <xs:complexType>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element ref="Person" />
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>

From this we could now create our classes as previously outlined.

Creating an XML schema based on a .NET type

What if we've got a class/type and we want to serialize it as XML? Let's use xsd.exe to create the XML schema for us.

If the class looks like the following

public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public int Age { get; set; }
}

Note: Assuming the class is compiled into an assembly called DomainObjects.dll

Then running xsd.exe with the following command line

xsd.exe DomainObjects.dll /type:Person

will then generate the following xml schema

<?xml version="1.0" encoding="utf-8"?>
<xs:schema elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Person" nillable="true" type="Person" />
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element minOccurs="0" maxOccurs="1" name="FirstName" type="xs:string" />
      <xs:element minOccurs="0" maxOccurs="1" name="LastName" type="xs:string" />
      <xs:element minOccurs="1" maxOccurs="1" name="Age" type="xs:int" />
    </xs:sequence>
  </xs:complexType>
</xs:schema>

You'll notice this is slightly different from the schema generated from the person.xml file.
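As a quick sanity check (my own sketch, not output from xsd.exe), serializing the Person class with XmlSerializer produces XML of the shape we started with:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public int Age { get; set; }
}

class Program
{
   static void Main()
   {
      var person = new Person { FirstName = "Spongebob", LastName = "Squarepants", Age = 21 };

      // XmlSerializer round-trips the class to/from the XML shown earlier
      var serializer = new XmlSerializer(typeof(Person));
      using (var writer = new StringWriter())
      {
         serializer.Serialize(writer, person);
         // produces a <Person> element containing FirstName, LastName and Age
         Console.WriteLine(writer.ToString());
      }
   }
}
```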

Messing around with JustMock lite

I've been trying out JustMock Lite (hereon known as JML) from Telerik – the lite version is free and its source is available on GitHub. The package is installable via NuGet.

So let’s start with a simple Arrange-Act-Assert sample

IFeed feed = Mock.Create<IFeed>();

// arrange
Mock.Arrange(() => feed.Update(10)).Returns(true).OccursOnce();

// act
feed.Update(10);

// assert
Mock.Assert(feed);

The example above shows how we create a mock object based upon the IFeed interface. We then arrange the mocked methods etc. The next step in the sample above is where we use the mocked methods before finally setting assertions.

Note: We do not get a "mock" type back from Mock.Create as we would with a framework like Moq; instead we get the IFeed itself, which I rather like, as we don't have to use the mock's Object property to get at the type being mocked. This is because in Moq the setup/arrange phase, and for that matter the assert phase, are all instance methods on the mock object, whereas in JML we use static methods on the Mock class.

Loose vs Strict

By default JML creates mocks with Behavior.Loose, which means that we don't need to supply all calls on the mock object upfront via the arrange mechanism. In other words, using Behavior.Loose simply means we might make calls on a mocked object's methods (for example) without having to explicitly set up the Arrange calls, and we'll get default behaviour. Behavior.Strict means any calls we make on the mock object must have been set up prior to being called on the mocked object.

Let’s look at an example of using JML’s strict behaviour

public interface IReader
{
   IEnumerable<string> ReadLine();
   string ReadToEnd();
}

[Fact]
public void ReadLine_EnsureCsvReaderUsesUnderlyingReader()
{
   IReader reader = Mock.Create<IReader>(Behavior.Strict);

   Mock.Arrange(() => reader.ReadLine()).Returns((IEnumerable<string>)null);

   CsvReader csv = new CsvReader(reader);
   csv.ReadLine();

   Mock.Assert(reader);
}

In the above, assuming (for the moment) that csv.ReadLine() calls the IReader ReadLine method, then all will work. But if we remove the Mock.Arrange call we'll get a StrictMockException, as we'd expect, since we've not set up the Arrange calls. Switching to Behavior.Loose in essence gives us a default implementation of the IReader ReadLine (as we've not explicitly provided one via the Mock.Arrange method) and all will work again.

As per other mocking frameworks this simply means if we want to enforce a strict requirement for each call on our mocked object to first be arranged, then we must do this explicitly.

JML also has two other behaviors. Behavior.RecursiveLoose allows us to create loose mocking on all levels of the mocked object.

Behavior.CallOriginal sets the mock object up to, by default, call the actual mocked object's methods/properties. Obviously this means it cannot be used on an interface or abstract method, but it does mean we can mock a class's virtual methods/properties (JustMock elevated – the commercial version of JustMock – looks like it supports mocking non-virtual/non-abstract members on classes), have the original object's methods called by default, and only Arrange those methods/properties we want to alter.

For example, the following code will pass our test, as JML will call our original code and does not require us to Arrange the return of the property Name

public class Reader
{
   public virtual string Name { get { return "DefaultReader"; }}
}

[Fact]
public void Name_ShouldBeAsPerTheImplementation()
{
   Reader reader = Mock.Create<Reader>(Behavior.CallOriginal);

   Assert.Equal("DefaultReader", reader.Name);

   Mock.Assert(reader);
}

Some mocking frameworks, such as Moq, will intercept the Name property call and return the default (null) value instead (assuming we've not set up any returns, of course).

More on CallOriginal

Behavior.CallOriginal sets up the mocked object as, by default, calling the original implementation code, but we can also setup Arrange calls to call the original implementation more explicitly.

For example

public class Reader
{
   public virtual string GetValue(string key)
   {
      return "default";
   }
}

Reader reader = Mock.Create<Reader>();

Mock.Arrange(() => reader.GetValue(null)).Returns("NullReader");
Mock.Arrange(() => reader.GetValue("key")).CallOriginal();

Assert.Equal("NullReader", reader.GetValue(null));
Assert.Equal("default", reader.GetValue("key"));

Mock.Assert(reader);

So here, when reader.GetValue is called with the argument "key", the original (concrete implementation) of the GetValue method is called.

Note: Moq also implements such a capability using the CallBase() method.