Monthly Archives: April 2014

MongoDB Replication & Recovery

In a non-trivial system, we’d normally look to have three types of database set-up. A primary is set up as a writeable database, one or more secondaries are set up as read-only databases and, finally, an arbiter is set up to help decide which secondary database takes over in the case of the primary database going down.

Note: An arbiter is added to break tied votes when deciding which secondary should take over as primary, and so is only needed where an even number of mongodb instances exists in a replication set.

The secondary databases will be “eventually consistent” in that when data is written to the primary database it is not immediately replicated to the secondary databases, but will “eventually” be replicated.

Let’s look at an example replication set…

To set up a replication set, we would start with a minimum of three instances of, or machines running, mongodb. As previously mentioned, this replication set would consist of a primary, a secondary and an arbiter.

Let’s run three instances on a single machine to begin with, so we need to create three database folders, for example

mkdir MyData\database1
mkdir MyData\database2
mkdir MyData\database3

Obviously, if all three are running on the same machine, we need to give the mongodb instances their own ports; for example, run the following commands, each in its own command prompt

mongod --dbpath MyData\database1 --port 30000 --replSet "sample"
mongod --dbpath MyData\database2 --port 40000 --replSet "sample"
mongod --dbpath MyData\database3 --port 50000 --replSet "sample"

“sample” denotes an arbitrary, user-defined name for our replication set. However, the replication set still hasn’t been created at this point. We instead need to run the shell against one of the servers, for example

Note: the sample above, with all databases on the same machine, is solely an example. Obviously no production system should implement this strategy: the primary, secondary and arbiter should each run on their own machine.

mongo --port 30000

Now we need to create the configuration for our replication set, for example

var sampleConfiguration =
{ _id : "sample", 
   members : [
     {_id : 0, host : 'localhost:30000', priority : 10 },
     {_id : 1, host : 'localhost:40000'},
     {_id : 2, host : 'localhost:50000', arbiterOnly : true } 
   ]
}

This sets up the replication set, stating that the host on port 30000 is the primary (due to its priority being set, in this example). The host on port 40000 has no priority (or arbiterOnly) setting, so this is the secondary, and finally we have the arbiter.

At this point we’ve created the configuration but we still need to actually initiate/run the configuration. So, again, from the shell we write

rs.initiate(sampleConfiguration)

Note: This will take a few minutes to configure all the instances which make up the replication set. Eventually the shell will return from the initiate call and should say “ok”.

The shell prompt should now change to show the replication set name of the currently connected server (i.e. PRIMARY).

Now if we write data to the primary it will “eventually” be replicated to all secondary databases.

If we take the primary database offline (or worse still a fault occurs and it’s taken offline without our involvement) a secondary database will be promoted to become the primary database (obviously in our example we only have one secondary, so this will take over as the primary). If/when the original primary comes back online, it will again become the primary database and the secondary will, of course, return to being a secondary database.

Don’t forget you can use

rs.help()

to view help for the various replication commands.
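To see the replication in action, here’s a rough sketch of a mongo shell session against the instances set up above; the database, collection and field names are purely illustrative.

```javascript
// In a shell connected to the primary (mongo --port 30000),
// write a document
db.people.insert({ name: "Bob" });

// In a shell connected to the secondary (mongo --port 40000),
// reads are disallowed by default, so permit them first...
rs.slaveOk();

// ...then, once replication has caught up, the document is visible
db.people.find({ name: "Bob" });

// rs.status() reports the state of each member
// (PRIMARY, SECONDARY, ARBITER)
rs.status();
```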

Entity Framework – lazy & eager loading

By default Entity Framework will lazy load any related entities. If you’ve not come across lazy loading before, it’s basically coding something in such a way that the item is not retrieved and/or created until you actually want to use it. For example, the code below shows that the AlternateNames list is not instantiated until you call the property.

public class Plant
{
   private IList<AlternateName> alternateNames;

   public virtual IList<AlternateName> AlternateNames
   {
      get
      {
         return alternateNames ?? (alternateNames = new List<AlternateName>());
      }
   }
}

So as you can see from the example above we only create an instance of IList when the AlternateNames property is called.

As stated at the start of this post, by default Entity Framework defaults to lazy loading which is perfect in most scenarios, but let’s take one where it’s not…

If you are returning an instance of an object (like Plant above), AlternateNames is not loaded until it’s referenced, however if you were to pass the Plant object over the wire using something like WCF, AlternateNames will not get instantiated. The caller/client will try to access the AlternateNames property and of course it cannot now be loaded. What we need to do is ensure the object is fully loaded before passing it over the wire. To do this we need to Eager Load the data.

Eager Loading is the process of ensuring a lazy loaded object is fully loaded. In Entity Framework we achieve this using the Include method, thus

return context.Plants.Include("AlternateNames");
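As an aside, if you’d rather avoid the magic string in the Include call, Entity Framework (4.1 onwards) also supplies a lambda-based Include extension method in the System.Data.Entity namespace, so something along these lines should be equivalent:

```csharp
using System.Data.Entity; // home of the lambda-based Include extension

// equivalent to Include("AlternateNames"), but checked at compile time
return context.Plants.Include(p => p.AlternateNames);
```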

Comparing Moq and JustMock lite

This is not meant as a “which is best” post or even a feature blow by blow comparison but more a “I’m using JustMock lite (known henceforth as JML) how do I do this in Moq” or vice versa.

Please note, Moq version 4.20 has introduced SponsorLink, which appears to send data to some third party. See discussions on GitHub.

For this post I’m using Moq 4.2.1402.2112 and JML 2014.1.1424.1

For the code samples, I’m writing xUnit tests but I’m not necessarily going to write code to use the mocks; instead I will call directly on the mocks to demonstrate solely how they would work. Such tests would obviously only really ensure the mocking framework worked as expected, but hopefully the idea of the mocks’ usage is conveyed in as little code as possible.

Strict behaviour

By default both Moq and JML use loose behavior, meaning simply that if we do not create any Setup or Arrange code for the methods/properties being mocked, then the mocking framework will default them. When using strict behavior we are basically saying that if a method or property is called on the mock object and we’ve not set up any behaviors for it, then the mocking framework should fail, meaning we’ll get an exception from the mocking framework.

Following is an example of using the strict behavior – removing the Setup/Arrange will cause a mocking framework exception, adding the Setup/Arrange will fulfill the strict behavior and allow the code to complete

Using Moq

Mock<IFeed> feed = new Mock<IFeed>(MockBehavior.Strict);

feed.Setup(f => f.GetTitle()).Returns("");

feed.Object.GetTitle();

Using JML

IFeed feed = Mock.Create<IFeed>(Behavior.Strict);

Mock.Arrange(() => feed.GetTitle()).Returns("");

feed.GetTitle();

Removing the MockBehavior.Strict/Behavior.Strict from the mock calls will switch to loose behaviors.

Ensuring a mocked method/property is called n times

Occasionally we want to ensure that a method/property is called a specified number of times only, for example, once, at least n times, at most n etc.

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();

feed.Setup(f => f.GetTitle()).Returns("");

feed.Object.GetTitle();

feed.Verify(f => f.GetTitle(), Times.Once);

Using JML

IFeed feed = Mock.Create<IFeed>();

Mock.Arrange(() => feed.GetTitle()).Returns("").OccursOnce();

feed.GetTitle();

Mock.Assert(feed);

In both examples we could change OccursOnce()/Times.Once to OccursNever()/Times.Never or Occurs(2)/Times.Exactly(2) and so on.

Throwing exceptions

On occasion we may want to mock an exception. Maybe our IFeed throws a WebException if it cannot download data from a website and we want to simulate this on our mock object; then we can use the following

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();

feed.Setup(f => f.Download()).Throws<WebException>();

Assert.Throws<WebException>(() => feed.Object.Download());

feed.Verify();

Using JML

IFeed feed = Mock.Create<IFeed>();

Mock.Arrange(() => feed.Download()).Throws<WebException>();

Assert.Throws<WebException>(() => feed.Download());

Mock.Assert(feed);

Supporting multiple interfaces

Occasionally we might be mocking an interface, such as IFeed but our application will check if the IFeed object also supports IDataErrorInfo (for example) and handle the code accordingly. So, without actually changing the IFeed what we would expect is a concrete class which implements both interfaces.

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();
feed.As<IDataErrorInfo>();

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed.Object);

Using JML

IFeed feed = Mock.Create<IFeed>(r => r.Implements<IDataErrorInfo>());

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed);

As you can see, we add interfaces to our mock in Moq using the As method and in JML using the Implements method. We can chain these methods together to add further interfaces to our mock, as per

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();
feed.As<IDataErrorInfo>().
     As<INotifyPropertyChanged>();

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed.Object);
Assert.IsAssignableFrom(typeof(INotifyPropertyChanged), feed.Object);

Using JML

IFeed feed = Mock.Create<IFeed>(r => 
   r.Implements<IDataErrorInfo>().
     Implements<INotifyPropertyChanged>());

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed);
Assert.IsAssignableFrom(typeof(INotifyPropertyChanged), feed);

Automocking

One of the biggest problems when unit testing using mocks is when a system under test (SUT) requires many parts to be mocked and setup, or if the code for the SUT changes often, requiring refactoring of tests to simply add/change etc. the mock objects used.

As you’ve already seen, with loose behavior we can get around the need to set up every single piece of code and thus concentrate our tests on specific areas without creating a thousand and one mocks and setup/arrange sections of code. But with a possibly ever-changing SUT it would be good if we didn’t need to continually add/remove mocks which we might not be testing against.

What would be nice is if the mocking framework could work like an IoC system and automatically inject the mocks for us – this is basically what auto mocking is about.

So if we look at the code below, assume for a moment that initially the code didn’t include IProxySettings. We write our IFeedList mock and the code to test the RssReader; now we add a new interface, IProxySettings, and we need to alter the tests to include this interface even though our current test code doesn’t need it. Of course, with the addition of a single interface this may seem a little over the top, however it can easily get a lot worse.

So here’s the code…

System under test and service code

public interface IFeedList
{
   string Download();
}

public interface IProxySettings
{		
}

public class RssReader
{
   private IFeedList feeds;
   private IProxySettings settings;

   public RssReader(IProxySettings settings, IFeedList feeds)
   {
      this.settings = settings;
      this.feeds = feeds;
   }

   public string Download()
   {
      return feeds.Download();
   }
}

Now when the auto mocking container mocks the RssReader, it will automatically inject mocks for the two interfaces, then it’s up to our test code to setup or arrange expectations etc. on it.

Using Moq

Unlike the code you will see (further below) for JML, Moq doesn’t come with an auto mock container by default (the JML NuGet package will add the Telerik.JustMock.Container by default). Instead Moq appears to have several auto mocking containers created for use with it by the community at large. I’m going to concentrate on Moq.Contrib, which includes the AutoMockContainer class.

MockRepository repos = new MockRepository(MockBehavior.Loose);
AutoMockContainer container = new AutoMockContainer(repos);

RssReader rss = container.Create<RssReader>();

container.GetMock<IFeedList>().Setup(f => f.Download()).Returns("Data");

Assert.Equal("Data", rss.Download());

repos.VerifyAll();

Using JML

var container = new MockingContainer<RssReader>();

container.Arrange<IFeedList>(f => f.Download()).Returns("Data");

Assert.Equal("Data", container.Instance.Download());

container.AssertAll();

In both cases the auto mock container created our RssReader, mocking the interfaces passed to it.

That’s it for now, I’ll add further comparisons as and when I get time.

Getting started with Linq Expressions

The Expression class is used to represent expression trees and is seen in use within LINQ. If you’ve been creating your own LINQ provider you’ll also have come across Expressions. For example see my post Creating a custom Linq Provider on this subject.

Getting started with the Expression class

Expression objects can be used in various situations…

Let’s start by looking at using Expressions to represent lambda expressions.

Expression<Func<bool>> e = () => a < b;

In the above we declare an Expression which takes a Func which takes no arguments and returns a Boolean (a and b here are simply int variables in scope). On the right-hand side of the assignment operator we can see an equivalent lambda expression, i.e. one which takes no arguments and returns a Boolean.

From this Expression we can then get at the function within the Expression by calling the Compile method thus

Func<bool> f = e.Compile();

We could also create the same lambda expression using the Expression’s methods. For example

ConstantExpression lParam = Expression.Constant(a, typeof(int));
ConstantExpression rParam = Expression.Constant(b, typeof(int));
BinaryExpression lessThan = Expression.LessThan(lParam, rParam);
Expression<Func<bool>> e = Expression.Lambda<Func<bool>>(lessThan);

This probably doesn’t seem very exciting in itself, but if we can create an Expression from a lambda then we can also deconstruct a lambda into an Expression tree. So in the previous lambda example we could look at the left and right side of the a < b expression and find the types and other such things; we could evaluate the parts or simply traverse the expression and, say, create a database query from it, but that’s a subject beyond this post.
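As a minimal sketch of that deconstruction (note that because a and b are captured variables, the operands show up as member accesses on a compiler-generated closure rather than as ConstantExpressions):

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
   static void Main()
   {
      int a = 1, b = 2;
      Expression<Func<bool>> e = () => a < b;

      // The lambda body is the BinaryExpression representing a < b
      var body = (BinaryExpression)e.Body;

      Console.WriteLine(body.NodeType);   // LessThan
      Console.WriteLine(body.Left.Type);  // System.Int32
      Console.WriteLine(e.Compile()());   // True
   }
}
```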

An alternate use

An interesting use of Expressions can be found in many MVVM base classes (or the likes). I therefore take absolutely no credit for the idea.

The scenario is this. We want to create a base class for handling the INotifyPropertyChanged interface, it will look like this

public class PropertyChangedObject : INotifyPropertyChanged
{
   public event PropertyChangedEventHandler PropertyChanged;

   public void OnPropertyChanged(string propertyName)
   {
      if (PropertyChanged != null)
      {
         PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
      }
   }
}

Next let’s write a simple class to use this, such as

public class MyObject : PropertyChangedObject
{
   private string name;

   public string Name
   {
      get { return name; }
      set
      {
         if (name != value)
         {
            name = value;
            OnPropertyChanged("Name");
         }
      }
   }
}

As you can see, within the setter, we need to check whether the value stored in the Name property is different to the new value passed to it and if so, update the backing field and then raise a property changed event passing a string to represent the property name.

An obvious problem with this approach is that “magic strings” can sometimes be incorrect (i.e. spelling mistakes or the likes). So it would be nicer if we could somehow pass the property name in a more typesafe and compile time checking way. It would also be nice to wrap the whole if block in an extension method which we can reuse in all the setters on our object.

Note: before we go much further with this, in .NET 4.5 there’s a better way to implement this code. See my post on the CallerMemberNameAttribute attribute.

So one way we could pass the property name, which at least ensures the property exists at compile time, is to use an Expression object, which will then include all the information we need (and more).

Here’s what we want the setter code to look like

public string Name
{
   get { return name; }
   set { this.RaiseIfPropertyChanged(p => p.Name, ref name, value); }
}

The second and third arguments are self-explanatory, but for the sake of completeness let’s review them. The second argument takes a reference to the backing field; this will be set to the value contained in the third argument only if the two differ, at which point we expect an OnPropertyChanged call to be made and the PropertyChanged event to occur.

The first argument is the bit relevant to the topic of this post, i.e. the Expression class.

Let’s look at the extension method that implements this and then walk through it

public static void RaiseIfPropertyChanged<TModel, TValue>(this TModel po, 
         Expression<Func<TModel, TValue>> e,  
         ref TValue backingField, 
         TValue value) where 
            TModel : PropertyChangedObject
{
   if (!EqualityComparer<TValue>.Default.Equals(backingField, value))
   {
      var m = e.Body as MemberExpression;
      if(m != null)
      {
         backingField = value;
         po.OnPropertyChanged(m.Member.Name);
      }
   }
}

The method can be used on any type which inherits from PropertyChangedObject; this is obviously so we have access to the OnPropertyChanged method.

We check the equality of the backingField and value and obviously, only if they’re different do we bother doing anything. Assuming the values are different we then get the Body of the expression as a MemberExpression, on this the Member.Name property will be a string representing the name of the property supplied in the calling property, i.e. in this example the property name “Name”.

So now when we use the RaiseIfPropertyChanged extension method we have a little more type safety, i.e. the property passed to the expression must be the same type as the backing field and value, and of course a misspelled/non-existent property will fail to compile as well, which lessens the chances of “magic string” typos. Obviously if we passed another property of the same type into the Expression then this would compile and seemingly work, but OnPropertyChanged would be passed an incorrect property string; this is where the CallerMemberNameAttribute would help us further.

Step by step mocking with RhinoMock

Requirements

We’re going to work through a simple set of scenarios/tests using RhinoMocks and NUnit. So we’ll need the following to get started

I’m using the RhinoMocks NuGet package, version 3.6.1 for these samples and the NuGet package for NUnit 2.6.3.

What are we going to mock

For the sake of these examples we’re going to have an interface, which we will mock out, named ISession. Here’s the code for ISession

public interface ISession
{
   /// <summary>
   /// Subscribe to session events
   /// </summary>
   /// <param name="o">the object that recieves event notification</param>
   /// <returns>a token representing the subscription object</returns>
   object Subscribe(object o);
   /// <summary>
   /// Unsubscribe from session events
   /// </summary>
   /// <param name="token">a token supplied via the Subscribe method</param>
   void Unsubscribe(object token);
   /// <summary>
   /// Executes a command against the session object
   /// </summary>
   /// <param name="command">the command to be executed</param>
   /// <returns>an object representing any return value for the given command</returns>
   object Execute(string command);
}

Getting started

Before we can create any mock objects we need to create an instance of the factory class or more specifically the MockRepository. A common pattern within NUnit test fixtures is to create this during the SetUp, for example

[TestFixture]
public class DemoTests
{
   private MockRepository repository;

   [SetUp]
   public void SetUp()
   {
      repository = new MockRepository();
   }
}

The MockRepository is used to create mock objects but also can be used to record, replay and verify mock objects (and more). We’ll take a look at some of these methods as we go through this step by step guide.

If, as is often the case with tests I’ve written with RhinoMocks, we want to verify all expectations on our mocks, then using NUnit we can handle this in the TearDown, for example by adding the following to the DemoTests class

[TearDown]
public void TearDown()
{
   repository.VerifyAll();
}

this will verify that all mock expectations have been met when the test fixture is torn down.

Mocks vs DynamicMocks

In previous posts on Moq and JustMock Lite I’ve mentioned strict and loose behavior. Basically, loose behavior on a mock object means I do not need to supply expectations for every method call, property invocation etc. on a mocked object, whereas strict means the opposite, in that if we do not supply the expectation an ExpectationViolationException will be thrown by RhinoMocks.

In RhinoMock terminology, mocks have strict semantics whereas dynamic mocks have loose.

So, to see this in action let’s add two tests to our test fixture

[Test]
public void TestMock()
{
   ISession session = repository.CreateMock<ISession>();
   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
}

[Test]
public void TestDynamic()
{
   ISession session = repository.DynamicMock<ISession>();
   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
}

Our Mapper class looks like the following

public class Mapper
{
   private readonly ISession session;
   private object token;

   public Mapper(ISession session)
   {
      this.session = session;

      token = session.Subscribe(this);
   }
}

Don’t worry about repository.ReplayAll(); we’ll get to that in a minute or two.

Now if we run these two tests TestDynamic will succeed whereas TestMock will fail with an ExpectationViolationException. The dynamic mock worked because of the loose semantics which means it does not require all expectations set before usage. We can fix the TestMock method by writing an expectation for the call to the Subscribe method on the ISession interface.

So changing the test to look like the following

[Test]
public void TestMock()
{
   ISession session = repository.CreateMock<ISession>();
   Expect.Call(session.Subscribe(null)).IgnoreArguments().Return(null).Repeat.Any();

   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
}

So in the above code we arrange our expectations. Basically we’re saying expect a call on the Subscribe method of the session mock object. In this case we pass in null and tell the mock to ignore the arguments, removing IgnoreArguments means we expect that Mapper will call the Subscribe method passing the exact arguments supplied in the expectation, i.e. in this case null.

Next we’re setting the expectation to return null and we don’t care how many times this method is called, so we call Repeat.Any(). If we wish to change the expectation to ensure the method is called just the once, we can change this to Repeat.Once() which is obviously more specific and useful for catching scenarios where a method is accidentally called more times than necessary. In our Mapper’s case this cannot happen and the method can only be called once, so we’d normally set this to Repeat.Once().

What we’ve done is supply defaults, which is probably what the dynamic mock object would have implemented for our expectations as well, hence my use of Repeat.Any() to begin with. The implementation above will now cause the test to succeed.

Record/Playback

Now to return to repository.ReplayAll(). RhinoMocks works in a record/playback way; that is, by default it’s in record mode, so if in TestDynamic we comment out repository.ReplayAll() we’ll get an InvalidOperationException telling us the mock object is in a record state. We arrange our expectations in the record phase, then act upon them during playback. As we are, by default, in record mode we can simply start creating our expectations, then when we’re ready to act on those mocked objects we switch the MockRepository to playback mode using repository.ReplayAll().

Arrange

As already mentioned, we need to set up expectations on our mock object (unless we’re using dynamic mocks, of course). We do this during the arrange phase, as was shown with the line

Expect.Call(session.Subscribe(null)).IgnoreArguments().Return(null).Repeat.Any();

One gotcha is if your method takes no arguments and returns void. So let’s assume ISession now has a method DoSomething which takes no arguments and returns void and see what happens…

Trying to write the following

Expect.Call(session.DoSomething()).Repeat.Any();

will fail to compile as we cannot convert from void to Rhino.Mocks.Expect.Action; we can easily fix this by removing the () and using the following line

Expect.Call(session.DoSomething).Repeat.Any();

Equally if the ISession had a property named Result which was of type string we can declare the expectation as follows

Expect.Call(session.Result).Return("hello").Repeat.Any();

We can also setup an expectation on a method call using the following

session.Subscribe(null);
LastCall.IgnoreArguments().Return(null).Repeat.Any();

in this case the LastCall allows us to set our expectations on the previous method call, i.e. this is equivalent to our previous declaration for the expectation on the Subscribe method. This syntax is often used when dealing with event handlers.

Mocking Event Handlers

Let’s assume we have the following on the ISession

event EventHandler StatusChanged;

the idea being that a session object may change state, maybe to disconnected, and we want the Mapper to respond to this in some way. We then want to cause the event to fire and see whether the Mapper changes accordingly.

Okay, so let’s rewrite the Mapper constructor to look like the following

public Mapper(ISession session)
{
   Status = "Connected";
   this.session = session;

   session.StatusChanged += (sender, e) =>
   {
      Status = "Disconnected";
   };
}

The assumption is that we have a string property Status and that if a status change event is received the status should switch from Connected to Disconnected.

Firstly we need to handle the expectation of the += being called on the event in ISession, so our test would look like this

[Test]
public void TestMock()
{
   ISession session = repository.CreateMock<ISession>();

   session.StatusChanged += null;
   LastCall.IgnoreArguments();

   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
   Assert.AreEqual("Connected", mapper.Status);
}

Notice we use LastCall to create an expectation on the += being called on the StatusChanged event. This should run without any errors.

Now we want to change things to see if the Mapper Status changes when a StatusChanged event takes place. So we need a way to raise the StatusChanged event. RhinoMocks includes the IEventRaiser interface for this, so rewriting our test as follows, will solve this requirement

[Test]
public void TestMock()
{
   ISession session = repository.CreateMock<ISession>();

   session.StatusChanged += null;
   LastCall.IgnoreArguments();

   IEventRaiser raiser = LastCall.GetEventRaiser();

   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
   Assert.AreEqual("Connected", mapper.Status);

   raiser.Raise(null, null);

   Assert.AreEqual("Disconnected", mapper.Status);
}

Notice we use LastCall.GetEventRaiser() to get an IEventRaiser. This will allow us to raise events on the StatusChanged event. We could simply combine the two LastCall calls to form

IEventRaiser raiser = LastCall.IgnoreArguments().GetEventRaiser();

The call raiser.Raise(null, null) is used to actually raise the event from our test, the two arguments match the arguments on an EventHandler, i.e. an object (for the sender) and EventArgs.

More types of mocks

Along with CreateMock and DynamicMock you may notice some other mock creation methods.

What are the *MultiMocks?

CreateMultiMock and DynamicMultiMock allow us to create a mock (strict semantics for CreateMultiMock, loose for DynamicMultiMock) supporting multiple types. In other words, let’s assume our implementation of ISession is expected to support another interface, IStatusUpdate, and this will have the event we previously declared, i.e.

public interface IStatusUpdate
{
   event EventHandler StatusChanged;
}

Now we change the Mapper constructor to allow it to check whether the ISession also supports IStatusUpdate and only then subscribe to its event, for example

public Mapper(ISession session)
{
   Status = "Connected";
   this.session = session;

   IStatusUpdate status = session as IStatusUpdate;
   if (status != null)
   {
      status.StatusChanged += (sender, e) =>
      {
         Status = "Disconnected";
      };
   }
}

and finally let’s change the test to look like

[Test]
public void TestMock()
{
   ISession session = repository.CreateMultiMock<ISession>(typeof(IStatusUpdate));

   ((IStatusUpdate)session).StatusChanged += null;

   IEventRaiser raiser = LastCall.IgnoreArguments().GetEventRaiser();

   repository.ReplayAll();

   Mapper mapper = new Mapper(session);
   Assert.AreEqual("Connected", mapper.Status);

   raiser.Raise(null, null);

   Assert.AreEqual("Disconnected", mapper.Status);
}

As you can see, we’ve now created a mock ISession object which also supports IStatusUpdate.

PartialMock

The partial mock allows us to mock part of a class. For example, let’s do away with our Mapper and just write a test to check what’s returned from this new Session class

public class Session
{
   public virtual string Connect()
   {
      return "none";
   }
}

and our test looks like this

[Test]
public void TestMock()
{
   Session session = repository.PartialMock<Session>();

   repository.ReplayAll();

   Assert.AreEqual("none", session.Connect());
}

This will run and succeed when we use the PartialMock as it automatically uses the Session object’s Connect method, but we can override this as follows

[Test]
public void TestMock()
{
   Session session = repository.PartialMock<Session>();

   Expect.Call(session.Connect()).Return("hello").Repeat.Once();

   repository.ReplayAll();

   Assert.AreEqual("hello", session.Connect());
}

Now if instead we use CreateMock in the above, this will still work; but if we remove the Expect.Call, the mock does not fall back to the Session’s Connect method and instead fails with an ExpectationViolationException.

So if you need to mock a concrete object but have the code use the concrete class methods in places, you can use the PartialMock.

Note: The methods on the Session class need to be marked as virtual for the above to work

Obviously a PartialMultiMock can be used to implement more than one type.

Stubs

A stub is generally seen as an implementation of a class with minimal functionality, i.e. if we were to implement any of our ISession interfaces (shown in this post) and for properties we simply set and get from a backing store, methods return defaults and do nothing. Methods could return values but it’s all about minimal implementations and consistency. Unlike mocks we’re not trying to test behavior, so we’re not interested in whether a method was called once or a hundred times.

Often a mock with loose semantics will suffice, but RhinoMocks includes a specific stub type that’s created via

repository.Stub<ISession>();

the big difference between this and a dynamic mock is that, in essence, properties are all declared as

Expect.Call(session.Name).PropertyBehavior();

implicitly (PropertyBehavior is discussed in the next section). This means if we run a test using a dynamic mock, such as

ISession session = repository.DynamicMock<ISession>();
repository.ReplayAll();

session.Name = "Hello";

Assert.AreEqual(null, session.Name);

The property session.Name will be null even though we assigned it “Hello”. Using a stub, RhinoMocks gives us an implementation of the property setter/getter and thus the following would result in a test passing

ISession session = repository.Stub<ISession>();
repository.ReplayAll();

session.Name = "Hello";

Assert.AreEqual("Hello", session.Name);

i.e. session.Name now has the value “Hello”.

Mocking properties

So, we’ve got the following interface

public interface ISession
{
   string Name { get; set; }
}

now what if we want to handle the getter and setter as if they were a simple property implementation (i.e. implemented exactly as shown in the interface)? Instead of creating return values etc. we can use a shortcut

Expect.Call(session.Name).PropertyBehavior();

which basically creates an implementation of the property which we can now set and get from without setting full expectations, i.e. the following test shows us changing the Name property after the replay

[Test]
public void TestMock()
{
   ISession session = repository.CreateMock<ISession>();

   Expect.Call(session.Name).PropertyBehavior();

   repository.ReplayAll();

   session.Name = "Hello";
   Assert.AreEqual("Hello", session.Name);
}

Generating classes from XML using xsd.exe

The XML Schema Definition Tool (xsd.exe) can be used to generate xml schema files from XML and better still C# classes from xml schema files.

Creating classes based upon an XML schema file

So, in its simplest usage, we can simply type

xsd person.xsd /classes

and this generates C# classes representing the xml schema. The default output is C#, but using the /language switch (or the shorter form /l) we can generate Visual Basic with the VB value, JScript with JS, or CS if we wanted to explicitly state that the language is C#. So, for example, using the previous command line but now generating VB code, we can write

xsd person.xsd /classes /l:VB

Assuming we have an xml schema, person.xsd, which looks like this

<?xml version="1.0" encoding="utf-8"?>
<xs:schema elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Person" nillable="true" type="Person" />
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element minOccurs="0" maxOccurs="1" name="FirstName" type="xs:string" />
      <xs:element minOccurs="0" maxOccurs="1" name="LastName" type="xs:string" />
      <xs:element minOccurs="1" maxOccurs="1" name="Age" type="xs:int" />
    </xs:sequence>
  </xs:complexType>
</xs:schema>

The class created (in C#) looks like the following (comments removed)

[System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "4.0.30319.17929")]
[System.SerializableAttribute()]
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.ComponentModel.DesignerCategoryAttribute("code")]
[System.Xml.Serialization.XmlRootAttribute(Namespace="", IsNullable=true)]
public partial class Person {
    
    private string firstNameField;
    
    private string lastNameField;
    
    private int ageField;
    
    public string FirstName {
        get {
            return this.firstNameField;
        }
        set {
            this.firstNameField = value;
        }
    }
    
    public string LastName {
        get {
            return this.lastNameField;
        }
        set {
            this.lastNameField = value;
        }
    }
    
    public int Age {
        get {
            return this.ageField;
        }
        set {
            this.ageField = value;
        }
    }
}
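The generated class plugs straight into the standard XmlSerializer; for example (a minimal sketch, assuming the generated Person class above is compiled into the project):

    using System;
    using System.IO;
    using System.Xml.Serialization;

    class Program
    {
        static void Main()
        {
            var person = new Person { FirstName = "Spongebob", LastName = "Squarepants", Age = 21 };

            // Serialize the generated type; the output conforms to person.xsd
            var serializer = new XmlSerializer(typeof(Person));
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, person);
                Console.WriteLine(writer.ToString());
            }
        }
    }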

Creating an XML schema based on an XML file

It might be that we’ve got an XML file but no xml schema, so we’ll need to create a schema from it before we can generate our classes. Again we can use xsd.exe

xsd person.xml

the above will create an xml schema based upon the XML file. Obviously this is limited to what is available in the XML file itself, so if your XML doesn’t have “optional” elements/attributes, xsd.exe cannot include those in the schema it produces.

Assuming we therefore started with an XML file, the person.xml, which looks like the following

<?xml version="1.0" encoding="utf-8"?>

<Person>
   <FirstName>Spongebob</FirstName>
   <LastName>Squarepants</LastName>
   <Age>21</Age>
</Person>

Note: I’ve no idea if that is really SpongeBob’s age.

Running xsd.exe against person.xml file we get the following xsd schema

<?xml version="1.0" encoding="utf-8"?>
<xs:schema id="NewDataSet" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
  <xs:element name="Person">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="FirstName" type="xs:string" minOccurs="0" />
        <xs:element name="LastName" type="xs:string" minOccurs="0" />
        <xs:element name="Age" type="xs:string" minOccurs="0" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
    <xs:complexType>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element ref="Person" />
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>

From this we could now create our classes as previously outlined.

Creating an xml schema based on .NET type

What if we’ve got a class/type and we want to serialize it as XML? Let’s use xsd.exe to create the XML schema for us.

If the class looks like the following

public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public int Age { get; set; }
}

Note: Assuming the class is compiled into an assembly called DomainObjects.dll

Then running xsd.exe with the following command line

xsd.exe DomainObjects.dll /type:Person

will then generate the following xml schema

<?xml version="1.0" encoding="utf-8"?>
<xs:schema elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Person" nillable="true" type="Person" />
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element minOccurs="0" maxOccurs="1" name="FirstName" type="xs:string" />
      <xs:element minOccurs="0" maxOccurs="1" name="LastName" type="xs:string" />
      <xs:element minOccurs="1" maxOccurs="1" name="Age" type="xs:int" />
    </xs:sequence>
  </xs:complexType>
</xs:schema>

You’ll notice this is slightly different from the code generated from the person.xml file.

Messing around with JustMock lite

I’ve been trying out JustMock Lite (hereon known as JML) from Telerik – the lite version is free and the source is available on GitHub. The package is installable via NuGet.

So let’s start with a simple Arrange-Act-Assert sample

IFeed feed = Mock.Create<IFeed>();

// arrange
Mock.Arrange(() => feed.Update(10)).Returns(true).OccursOnce();

// act
feed.Update(10);

// assert
Mock.Assert(feed);

The example above shows how we create a mock object based upon the IFeed interface. We then arrange the mocked methods etc. The next step in the sample above is where we use the mocked methods before finally setting assertions.

Note: We do not get a “mock” type back from Mock.Create as we would with a framework like Moq; instead we get the IFeed itself, which I rather like, as we don’t have to use the mock’s Object property to get at the type being mocked. This is because in Moq the setup/arrange phase, and for that matter the assert phase, are all instance methods on the mock object, whereas in JML we use static methods on the Mock class.
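For comparison, a sketch of the Moq equivalent (assuming the same IFeed interface), where the mocked type is reached via the wrapper’s Object property:

    // Moq comparison: arrange/assert are instance methods on the Mock<T>
    // wrapper, and the mocked IFeed is exposed via mock.Object.
    var mock = new Mock<IFeed>();
    mock.Setup(f => f.Update(10)).Returns(true);

    IFeed feed = mock.Object;   // unwrap to get the IFeed
    feed.Update(10);

    mock.VerifyAll();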

Loose vs Strict

By default JML creates mocks with Behavior.Loose, which means we don’t need to set up every call on the mock object upfront via the arrange mechanism. In other words, using Behavior.Loose simply means we might make calls on a mocked object’s methods (for example) without having to explicitly set up the Arrange calls, and we’ll get default behaviour. Behavior.Strict means any calls we make on the mock object must have been set up prior to being called on the mocked object.

Let’s look at an example of using JML’s strict behaviour

public interface IReader
{
   IEnumerable<string> ReadLine();
   string ReadToEnd();
}

[Fact]
public void ReadLine_EnsureCsvReaderUsesUnderlyingReader()
{
   IReader reader = Mock.Create<IReader>(Behavior.Strict);

   Mock.Arrange(() => reader.ReadLine()).Returns((IEnumerable<string>)null);

   CsvReader csv = new CsvReader(reader);
   csv.ReadLine();

   Mock.Assert(reader);
}

In the above, assuming (for the moment) that csv.ReadLine() calls the IReader ReadLine method, then all will work. But if we remove the Mock.Arrange call we’ll get a StrictMockException, as we’d expect since we’ve not set up the Arrange calls. Switching to Behavior.Loose in essence gives us a default implementation of the IReader ReadLine (as we’ve not explicitly provided one via Mock.Arrange) and all will work again.

As per other mocking frameworks this simply means if we want to enforce a strict requirement for each call on our mocked object to first be arranged, then we must do this explicitly.

JML also has two other behaviors. Behavior.RecursiveLoose allows us to create loose mocking on all levels of the mocked object.

Behavior.CallOriginal sets the mock object up to call the actual mocked object’s methods/properties by default. Obviously this means it cannot be used on an interface or abstract method, but it does mean we can mock a class’s virtual methods/properties (the commercial version of JustMock looks like it supports mocking non-virtual/non-abstract members on classes) and, by default, call the original object’s methods, only Arranging those methods/properties we want to alter.

For example, the following code will pass our test, as JML will call our original code and does not require us to Arrange the return of the property Name

public class Reader
{
   public virtual string Name { get { return "DefaultReader"; }}
}

[Fact]
public void Name_ShouldBeAsPerTheImplementation()
{
   Reader reader = Mock.Create<Reader>(Behavior.CallOriginal);

   Assert.Equal("DefaultReader", reader.Name);

   Mock.Assert(reader);
}

Some mocking frameworks, such as Moq, will instead intercept the Name property call and return the default (null) value (assuming we’ve not set up any returns, of course).

More on CallOriginal

Behavior.CallOriginal sets up the mocked object as, by default, calling the original implementation code, but we can also setup Arrange calls to call the original implementation more explicitly.

For example

public class Reader
{
   public virtual string GetValue(string key)
   {
      return "default";
   }
}

Reader reader = Mock.Create<Reader>();

Mock.Arrange(() => reader.GetValue(null)).Returns("NullReader");
Mock.Arrange(() => reader.GetValue("key")).CallOriginal();

Assert.Equal("NullReader", reader.GetValue(null));
Assert.Equal("default", reader.GetValue("key"));

Mock.Assert(reader);

So here, when reader.GetValue is called with the argument “key”, the original (concrete implementation) of the GetValue method is called.

Note: Moq also implements such a capability using the CallBase() method
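A sketch of that Moq equivalent, assuming the same Reader class: setting CallBase = true makes unmatched members fall through to the real implementation, while CallBase() on a setup explicitly routes a matched call to the original method.

    // Moq sketch: fall back to the concrete Reader unless a setup matches.
    var mock = new Mock<Reader> { CallBase = true };

    mock.Setup(r => r.GetValue(null)).Returns("NullReader");
    mock.Setup(r => r.GetValue("key")).CallBase();

    Assert.Equal("NullReader", mock.Object.GetValue(null));
    Assert.Equal("default", mock.Object.GetValue("key"));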

Trying FAKE out

For many years I’ve been using Nant scripts to build projects, run tests and all the other bits of fun for CI (Continuous Integration) or just running tests locally before checking in etc. Whilst I can do everything I’ve wanted to do in Nant, as a programmer, I’ve often found the XML syntax a little too verbose and in some cases confusing, so I’ve finally taken the plunge and am trying to learn to use FAKE, the F# make tool.

I don’t know whether I’ll prefer FAKE or decide I was better off with Nant – let’s see how it goes.

The whole thing

I’m going to jump straight into it by looking at a script I’ve built for building a CSV library I wrote and I’ll attempt to explain things thereafter. So here’s the whole thing…

#r @"/Development/FAKE/tools/FAKE/tools/FakeLib.dll"
open Fake

RestorePackages()

// Values
let mode = getBuildParamOrDefault "mode" "Debug"
let buildDir = "./Csv.Data/bin/" + mode + "/"
let testBuildDir = "./Csv.Data.Tests/bin/" + mode + "/"
let solution = "./Csv.Data.sln"
let testDlls = !! (testBuildDir + "/*.Tests.dll")
let xunitrunner = @"C:\Tools\xUnit\xunit.console.clr4.exe"

// Targets
Target "Clean" (fun _ ->
   CleanDirs [buildDir; testBuildDir]
)

Target "Build" (fun _ ->
    !! solution
        |> 
        match mode.ToLower() with
            | "release" -> MSBuildRelease testBuildDir "Build"
            | _ -> MSBuildDebug testBuildDir "Build"
        |> Log "AppBuild-Output"
)

Target "Test" (fun _ ->
    testDlls
        |> xUnit (fun p ->
            {p with
                ShadowCopy = false;
                HtmlOutput = true;
                XmlOutput = true;
                ToolPath = xunitrunner;                
                OutputDir = testBuildDir })
)

Target "Default" (fun _ ->
   trace "Building and Running Csv.Data Tests"
)

// Dependencies
"Clean"
==> "Build"
==> "Test"
==> "Default"

RunTargetOrDefault "Default" 

Breaking it down

The first thing we need in our script is a reference to FakeLib.dll, which contains all the FAKE goodness, and then of course a way of using the library. This is standard F# stuff – we use #r to reference the FakeLib.dll assembly and then open Fake, as per the code below

#r @"/Development/FAKE/tools/FAKE/tools/FakeLib.dll"
open Fake

I am using NuGet packages in my project and the project is set-up to restore any NuGet packages, but we can use FAKE to carry out this task for us using

RestorePackages()

The next section in the script is probably self-explanatory but for completeness let’s look at it anyway as it does touch on a couple of things. So next we declare values for use within the script

let mode = getBuildParamOrDefault "mode" "Debug"
let buildDir = "./Csv.Data/bin/" + mode + "/"
let testBuildDir = "./Csv.Data.Tests/bin/" + mode + "/"
let solution = "./Csv.Data.sln"
let testDlls = !! (testBuildDir + "/*.Tests.dll")
let xunitrunner = @"C:\Tools\xUnit\xunit.console.clr4.exe"

The first value, mode, is set to a command line parameter “mode” which allows us to define whether to create a Debug or Release build. The next three lines and the final line are pretty obvious, allowing us to create folder names based upon the selected mode and the filenames of the solution and the xUnit console runner.

The testDlls line might seem a little odd with its use of !! (double bang). This is declared in FAKE to give us a terse and simple way of including files using pattern matching. Whilst not used in this script, we can also just as easily include and/or exclude files using ++ (to include) and -- (to exclude).
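For example (a sketch, not part of the script above, with a hypothetical exclusion pattern), the !! and -- operators compose like this:

    // Sketch: include every test assembly under the build folder but
    // exclude any integration test assemblies (hypothetical naming).
    let unitTestDlls =
        !! (testBuildDir + "/*.Tests.dll")
        -- (testBuildDir + "/*.Integration.Tests.dll")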

Back to the script and we’re now going to create the targets. Just like Nant we can declare targets which carry out specific tasks and chain them or create dependencies using these targets.

So the first target I’ve implemented is “Clean”

Target "Clean" (fun _ ->
   CleanDirs [buildDir; testBuildDir]
)

The Target name is “Clean” and we pass a function to it which is called when the target is run. We run the FAKE function CleanDirs to (you guessed it) clean the buildDir and testBuildDir folders.

The next target is the “Build” step as per the following

Target "Build" (fun _ ->
    !! solution
        |> 
        match mode.ToLower() with
            | "release" -> MSBuildRelease testBuildDir "Build"
            | _ -> MSBuildDebug testBuildDir "Build"
        |> Log "AppBuild-Output"
)

Here we are creating a “file inclusion” using the value solution. Depending upon the selected mode we’ll either build in release mode or debug mode. At the end of the function we log the output of the build.

Next up we have a target for the unit tests. These were written using xUnit but FAKE has functions for NUnit, MSpec and MSTest (at the time of writing).

Target "Test" (fun _ ->
    testDlls
        |> xUnit (fun p ->
            {p with
                ShadowCopy = false;
                HtmlOutput = true;
                XmlOutput = true;
                ToolPath = xunitrunner;                
                OutputDir = testBuildDir })
)

As my project currently stores the test DLLs in their own folder, we pipe the testDlls value to the xUnit function. Again this uses pattern matching to include all .Tests.dll files, which is the naming convention I’ve used for my tests. The xUnit function is then passed its parameters via the function p. In this code we’re going to create both HTML and XML output.

The final target is simply created to write a message out and not really required, but here it is anyway

Target "Default" (fun _ ->
   trace "Building and Running Csv.Data Tests"
)

Obviously we could extend this to output meaningful instructions on using the script if we wanted, for now it’s just used to write out “Building and Running Csv.Data Tests”.

The targets are completed at this point, but I want a basic dependency set-up so the build server can run all targets in order, thus cleaning the output folders, building the solution and then running the unit tests, so we write

"Clean"
==> "Build"
==> "Test"
==> "Default"

In this case, Default depends on Test being run, which depends on Build which in turn depends on the Clean target running. Hence when we run the Default target, Clean, Build and Test are executed in order then Default is run.

However it should also be noted that if we run a target such as Build, FAKE still uses the dependency list, shortened so that Build depends only upon Clean. Hence running Build will actually run Clean first.

The last line of the script is

RunTargetOrDefault "Default" 

This simply says if the user supplies the target then run it, or by default (when no target is supplied) run the Default target.

To run this script we use the following

Fake build.fsx          // runs the default target
Fake build.fsx Clean    // runs the Clean target
Fake build.fsx Build mode=Debug  // runs the Build target with the mode set to Debug

Currently it appears we cannot run a target outside of the dependency list. In other words, let’s say we’ve built the code and just want to re-run the tests (maybe we built the code using Visual Studio and want a quick way to run the tests from the command line). Unfortunately I’ve not been able to find a way to do this. As mentioned earlier, running Test will run Clean then Build first.