Category Archives: Programming

Downloading a file from URL using basic authentication

I had some code in an application I work on which uses Excel to open a .csv file from a URL. The problem is that users have moved to Excel 2010 (yes, we're a little behind the latest versions) and basic authentication is no longer supported without registry changes (see Office file types fail to open from server).

So, to re-implement this I needed to write some code to handle the file download myself (as we're not able to change users' registry settings).

The code is simple enough, but I thought it'd be useful to document it here anyway.

using (WebClient client = new WebClient())
{
   client.Proxy = WebRequest.DefaultWebProxy;
   client.Credentials = new NetworkCredential(userName, password);
   client.DownloadFile(url, filename);
}

This code assumes that the url is supplied along with a filename for where to save the downloaded file.

We use a proxy, hence the proxy is supplied, and then we supply the NetworkCredential which will handle basic authentication. Here we need to supply the userName and password; of course, with basic authentication these will be passed as plain text over the wire.
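Note: with this approach WebClient only sends the credentials after the server responds with a 401 challenge. If the server expects the credentials up front (pre-emptive authentication), a sketch of supplying the basic authentication header ourselves (this also needs using System.Text for the Encoding class) might look like

string token = Convert.ToBase64String(
   Encoding.ASCII.GetBytes(userName + ":" + password));
client.Headers[HttpRequestHeader.Authorization] = "Basic " + token;

with these two lines placed before the DownloadFile call.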

SQL select case

As mentioned in other posts on SQL, I'm currently working on a simple database to maintain information on the plants I'm planting (or looking to plant) this year. I'm going to use this application to write some "reminders" on using SQL.

Note: Currently I’m using SQL Server for this database, so I’ve only tried this on SQL Server.

Select

A simple example which classifies plants by their maximum height.

select Name, MaxHeight,
   case when MaxHeight > 100 then 'Tall'
        when MaxHeight > 50 and MaxHeight <= 100 then 'Medium'
        when MaxHeight > 0 and MaxHeight <= 50 then 'Small'
        else 'No Information'
        end as Height
from Plant




SQL Basics (Part 4)

In the previous SQL Basics posts, we’ve looked at querying our data, CRUD operations and transactions.

Let’s now look at creating databases, tables etc. These statements fall under DDL (Data Definition Language).

Creating a database

We can create a database easily in SQL Server using

CREATE DATABASE Employee;

Now this is not accepted by every SQL database; file-based databases, for example, would probably not support it, as the existence of the file itself is the database.

Creating Tables

To create a table on a database, we might use the following command in SQL Server

USE Employee;

This will ensure that we do not need to prefix our table definitions etc. with the database name. So we can create a table as

CREATE TABLE Person(
   Id int,
   FirstName nchar (20),
   LastName nchar (20),
   Age int
)

Note: the types supported by different databases may differ, but generally you’ll have numeric, string and date/time types (at least).

In the example Person table, we created a table with an Id, FirstName, LastName and Age. We’ve not created any primary key for this, so let’s now delete (or DROP) the table and try again with the following (see below for how to DROP the table)

CREATE TABLE Person(
   Id int IDENTITY(1, 1) not null,
   FirstName nchar (20) not null,
   LastName nchar (20) not null,
   Age int null,
   CONSTRAINT PK_Person PRIMARY KEY CLUSTERED
   (
      Id ASC
   )
)

Now we’ve also been more explicit in stating whether columns may be NULL (Note: NULL is the default for a column, and some databases may not support explicitly declaring a column as NULL since it’s assumed NULL already). We’re using IDENTITY to create an auto incrementing ID; this may not be available in all SQL databases. Then finally we create a primary key constraint on the table named PK_Person.

DROP TABLE

To delete a table we call

DROP TABLE Person;

Obviously all the data will be lost when we drop a table. We may also need to remove relationships to the Person table first, if any exist.
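For example, if another table referenced Person via a foreign key (here a hypothetical Address table with a constraint named FK_Address_Person), we might first remove the relationship using

ALTER TABLE Address
DROP CONSTRAINT FK_Address_Person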

ALTER TABLE

As its name suggests, ALTER TABLE allows us to amend a table. It may also be used as part of the table creation process when foreign keys exist between tables which have not yet been created, i.e. we might create all the tables and then use ALTER TABLE to create the foreign keys etc.
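As a sketch of this, assuming a hypothetical Department table (with an Id primary key) and a Department_Id column on Person, we might add the foreign key as

ALTER TABLE Person
ADD CONSTRAINT FK_Person_Department
FOREIGN KEY (Department_Id) REFERENCES Department(Id)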

Let’s just make a simple change to add a Nationality field to the table

ALTER TABLE Person 
ADD Nationality nchar(2) null

Whoops, Nationality is a little on the small side, so we need to alter this column; let’s use

ALTER TABLE Person 
ALTER COLUMN Nationality nchar(20)

Or we could remove the column thus

ALTER TABLE Person 
DROP COLUMN Nationality

SQL Basics (Part 3)

CRUD

In previous SQL Basics posts we’ve looked at querying the database, but we actually need to get some data into the database. So let’s go over the Create, Retrieve, Update, Delete commands.

Create

To create data in SQL we use the INSERT keyword, for example

INSERT INTO Plants (CommonName, Type_Id) VALUES ('Strawberry', 2)

This creates a new plant with the CommonName Strawberry and the Type_Id 2.

Retrieve

We’re not going to go through this again as we’ve dealt with the SELECT query already (this is the way we retrieve data using SQL).

Update

The UPDATE keyword is used to alter data with SQL.

Beware, if no WHERE clause is used then ALL rows will be updated

UPDATE Plants SET CommonName = 'Strawberry' WHERE Id = 123

Delete

DELETE is used to remove items from the database using SQL. We can delete one or more rows at a time but only from one table at a time, for example

DELETE FROM Plants

Beware: you can easily delete ALL rows from a table, as in the example above, if you do not specify a WHERE clause.
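To delete specific rows we simply add a WHERE clause, for example (assuming Plants has an Id column)

DELETE FROM Plants WHERE Id = 123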

Transactions

We’ve finished looking at the CRUD operations, but you’ve probably noted some of the pitfalls of not correctly forming your DELETE or UPDATE queries. So let’s look at transactions, which allow us to make changes that are not permanent until the transaction is completed.

First off, let’s look at the concept of ACID…

ACID stands for Atomic, Consistent, Isolated and finally Durable.

A transaction is said to be atomic in that it either happens or doesn’t happen, i.e. we cannot have a partially altered set of data. It’s consistent if the transaction leaves the database in a consistent state. It’s isolated in that it occurs in a serial way and finally it’s durable if it’s “permanently” stored, i.e. it’s not kept in memory but stored on disc so it will still be available after a reboot (for example).

Transaction syntax requires we tell the database we’re beginning a transaction, then we run our SQL command before either committing the transaction or rolling it back. For example

BEGIN TRANSACTION
DELETE FROM Plants
--ROLLBACK TRANSACTION
--COMMIT TRANSACTION

Now in the sample above I have started a transaction and deleted all Plants. If I now try to get all rows from the Plants table I’ll find they’re all gone, but uncommenting the ROLLBACK will allow us to cancel the transaction and return all the rows we appeared to have deleted.

Obviously had this been our intention then we could alternatively just uncomment the COMMIT and commit our changes.

SQL Basics (Part 2)

JOINS

Joins allow us to combine multiple tables into a single result set. So for example we might want the names of plants from our plant database along with information from other tables that relates to each plant.

As a more concrete example, we might have a foreign key from our Plant table which relates to the Plant Type (i.e. whether a plant is a Tree, Vegetable, Flower etc.)

CROSS JOIN

Before we look at an example of a JOIN as outlined above, let’s look at a CROSS JOIN. A CROSS JOIN is basically a JOIN of two tables without any where clause, so for example

select p.CommonName, pt.Name 
from Plants p, PlantTypes pt

This type of join is expensive in that it simply takes two tables and merges the data, and it’s probably of little use in terms of the result set produced. For example, in my plant data at the moment I have 29 plants listed (not many I admit, but it’s early days) and I have 4 plant types. The result set of the CROSS JOIN above is 4 * 29 = 116 rows. Basically the result set lists each plant CommonName against each plant type Name.

INNER JOIN

An inner join is generally the most used JOIN, whereby we look for all items from one table matching a column to data from another table. So again using our Plants and PlantTypes tables, we might want to see the plant type associated with each plant in our database.

Note: We can create such a join without the INNER JOIN keywords; by default in SQL Server such joins are inner joins anyway

select p.CommonName, pt.Name 
from Plants p inner join PlantTypes pt
on p.Type_Id = pt.Id

So this assumes that Type_Id is a foreign key into the PlantTypes table and relates to its primary key.

FULL OUTER JOIN

A FULL OUTER JOIN returns rows from both tables whether or not they have a match. So, for example, if we allowed NULLs for our Type_Id, or there are no matches for a PlantTypes Id, then we’ll see NULL values in the output columns. The query would look like

select p.CommonName, pt.Name 
from Plants p full outer join PlantTypes pt
on p.Type_Id = pt.Id

Our result set may now display a NULL CommonName (for example) if a PlantTypes Id does not have a matching plant, i.e. we’ve not added any Trees yet to our Plants table.

Note: The FULL OUTER JOIN syntax is not supported by MySQL

LEFT OUTER JOIN

As we’ve seen with a FULL OUTER JOIN, we might have a plant type which is not yet used in the Plants table and therefore we’ll see a NULL for the CommonName column. However we might also have a plant which doesn’t yet have a Type_Id and hence would have a NULL in that column.

If we don’t want to view plant types which have NULL CommonNames (in other words we only really care about the plants, not the types), but we do want to see all plants and their plant types regardless of whether the type is NULL, we can use a LEFT OUTER JOIN

select p.CommonName, pt.Name 
from Plants p left outer join PlantTypes pt
on p.Type_Id = pt.Id

In this case we get rows from the left-hand table whether or not they have a matching value in the right-hand table.

RIGHT OUTER JOIN

As you’ve guessed, a RIGHT OUTER JOIN returns all rows from the right-hand table (PlantTypes), whether or not they match a plant; plant types with no plants associated with them will appear with a NULL CommonName.

select p.CommonName, pt.Name 
from Plants p right outer join PlantTypes pt
on p.Type_Id = pt.Id

SELF JOIN

We’ve seen how to create joins with other tables, but we can actually join a table with itself. There are no SELF JOIN keywords; it’s more the concept of joining one table with itself. The most obvious use of such a join is with hierarchical data, for example if a plant type had a parent plant type we could join the child plant type against the parent plant type on the same table.
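As a sketch, assuming a hypothetical Parent_Id column on PlantTypes, such a query might look like

select child.Name, parent.Name as 'Parent Type'
from PlantTypes child left outer join PlantTypes parent
on child.Parent_Id = parent.Id

Here the left outer join ensures top-level plant types (with a NULL Parent_Id) still appear in the results.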

SQL basics (Part 1)

Let’s take a look at some SQL basics.

Note: Unless otherwise stated, the queries are tested in SQL Server only

Select

So to retrieve data from a database using SQL we write

select <columns> from <table>

and optionally we can add a where clause to reduce the result set based upon Boolean logic.

select <columns> from <table> where <boolean logic>

We can view all columns within a table using the wildcard *. However, in a production environment (at least where an application is retrieving data from the database) we’re better off specifying the columns we want, both to improve performance and to ensure that if new columns are added (or the like) our query still produces the same “expected” result set, column-wise anyway.

So using the wild card we would have something like

select * from Plants

or specifying the required columns we use something like

select CommonName from Plants

or retrieving multiple columns

select CommonName, Genus from Plants

When specifying columns we might be querying from multiple tables so it’s best to alias the table name in case of column name duplication across different tables, hence we get something like

select p.CommonName from Plants p

Aliasing columns

Using the alias table query (from above) we will get a result set (in SQL Server Management Studio) with the column name CommonName. We can assign an alias to the column during a query and therefore change the name output for the column, for example

select p.CommonName as 'Common Name' from Plants p

This will now output a column named Common Name.

Aliasing can also be used on the output from functions etc. So, for example

select count(*) as Count from Plants p

will output a column named Count.

Count

Count returns a scalar value. We can get the number of rows in a table using the wildcard

select count(*) from Plants

This will return the total number of rows in the Plants table, however using a column name within count returns the number of non-NULL rows, hence

select count(p.CommonName) from Plants p

Min/Max

Both MIN and MAX return a single value indicating the minimum or maximum non-NULL value from a column; both can be used on numeric or non-numeric columns.

Here’s an example of the usage

select max(p.Height) from Plants p

The above returns the maximum height found in the Plants table and

select min(p.Height) from Plants p

returns the minimum height.

AVG

AVG returns a single value indicating the average value within a selected column. AVG only works on numeric columns.

select avg(p.Height) from Plants p

SUM

SUM can be used on a numeric column and returns the sum of all values

select sum(p.Height) from Plants p

DISTINCT

The DISTINCT keyword allows us to get only distinct (non-duplicated) values, i.e.

select distinct p.Genus from Plants p

The above will basically remove duplicates.

GROUP BY

We can create sub-groups from our data using the GROUP BY clause. For example, say we want to replicate the DISTINCT functionality above by returning plant common names without duplicates; we can do this with GROUP BY as

select p.CommonName from Plants p group by p.CommonName

Basically we create groups based upon the CommonName and then output each group’s CommonName. But we can think of the result set as groups (or arrays) of data, so we can also do things like list a count of the number of duplicated names against each group

select p.CommonName, count(p.CommonName) from Plants p group by p.CommonName

This will now list each distinct group name and list a count alongside it to show how many items have that CommonName.

HAVING

The HAVING clause is used in a similar way to the WHERE clause but for GROUP BY result sets. An example might be where we’ve grouped by the CommonName of a plant in our database but are only interested in those names with more than a certain number of occurrences, thus

select p.CommonName, count(p.CommonName) 
from Plants p 
group by p.CommonName having count(p.CommonName) > 3

Now we basically get a result set with the CommonName and the count for those CommonNames occurring more than 3 times.

Note: In SQL Server we cannot alias count(p.CommonName) as c, for example, and then use c in the HAVING clause, whereas MySQL does allow this syntax
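For example, this sketch of the aliased form works in MySQL but fails in SQL Server

select p.CommonName, count(p.CommonName) as c 
from Plants p 
group by p.CommonName having c > 3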

Type conversions in C#

Converting one type to another

All of the primitive types, such as Int32, Boolean, String etc. implement the IConvertible interface. This means we can easily change one type to another by using

float f = (float)Convert.ChangeType("100", typeof(float));

The thing to note regarding IConvertible is that it’s one way, i.e. it converts from your type which implements IConvertible to another type, but not back (this is where the TypeConverter class, which we’ll discuss next, comes into play).

So let’s look at a simple example which converts a Point to a string, and yes before I show the code for implementing IConvertible, we could have simply overridden the ToString method (which I shall also show in the sample code).

First off let’s create a couple of tests to prove our code works. The first takes a Point and, using IConvertible, generates a string representation of the type. As it uses ToString, there’s no surprise that the second test, which uses the ToString method directly, produces the same output.

[Fact]
public void ChangeTypePointToString()
{
   Point p = new Point { X = 100, Y = 200 };
   string s = (string)Convert.ChangeType(p, typeof(string));

   Assert.Equal("(100,200)", s);
}

[Fact]
public void PointToString()
{
   Point p = new Point { X = 100, Y = 200 };

   Assert.Equal("(100,200)", p.ToString());
}

Now let’s look at our Point type, with an overridden ToString method

public struct Point : IConvertible
{
   public int X { get; set; }
   public int Y { get; set; }

   public override string ToString()
   {
      return String.Format("({0},{1})", X, Y);
   }

   // ... IConvertible methods
}

and now let’s look at a possible implementation of the IConvertible

TypeCode IConvertible.GetTypeCode()
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

bool IConvertible.ToBoolean(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

byte IConvertible.ToByte(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

char IConvertible.ToChar(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

DateTime IConvertible.ToDateTime(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

decimal IConvertible.ToDecimal(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

double IConvertible.ToDouble(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

short IConvertible.ToInt16(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

int IConvertible.ToInt32(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

long IConvertible.ToInt64(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

sbyte IConvertible.ToSByte(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

float IConvertible.ToSingle(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

string IConvertible.ToString(IFormatProvider provider)
{
   return ToString();
}

object IConvertible.ToType(Type conversionType, IFormatProvider provider)
{
   if(conversionType == typeof(string))
      return ToString();

   throw new InvalidCastException("The method or operation is not implemented.");
}

ushort IConvertible.ToUInt16(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

uint IConvertible.ToUInt32(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

ulong IConvertible.ToUInt64(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

TypeConverters

As mentioned previously, IConvertible allows us to convert a type to one of the primitive types, but what if we want more complex capabilities, converting to and from various types? This is where the TypeConverter class comes in.

Here we develop our type as normal and then we adorn it with the TypeConverterAttribute at the struct/class level. The attribute takes a type derived from the TypeConverter class. This TypeConverter derived class does the actual type conversion to and from our adorned type.

Let’s again create a Point struct to demonstrate this on

[TypeConverter(typeof(PointTypeConverter))]
public struct Point
{
   public int X { get; set; }
   public int Y { get; set; }
}

Note: We can also declare the TypeConverter type using a string in the standard Type, Assembly format, i.e. [TypeConverter("MyTypeConverters.PointTypeConverter, MyTypeConverters")], if we wanted to reference a type in an external assembly.

Before we create the TypeConverter code, let’s take a look at some tests which hopefully demonstrate how we use the TypeConverter and what we expect from our conversion code.

[Fact]
public void CanConvertPointToString()
{
   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Assert.True(tc.CanConvertTo(typeof(string)));
}

[Fact]
public void ConvertPointToString()
{
   Point p = new Point { X = 100, Y = 200 };

   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Assert.Equal("(100,200)", tc.ConvertTo(p, typeof(string)));
}

[Fact]
public void CanConvertStringToPoint()
{
   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Assert.True(tc.CanConvertFrom(typeof(string)));
}

[Fact]
public void ConvertStringToPoint()
{
   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Point p = (Point)tc.ConvertFrom("(100,200)");
   Assert.Equal(100, p.X);
   Assert.Equal(200, p.Y);
}

So as you can see, to get the TypeConverter for our class we call the static method GetConverter on the TypeDescriptor class. This returns an instance of our TypeConverter (in this case our PointTypeConverter). From this we can check whether the type converter can convert to or from a type, and then using the ConvertTo or ConvertFrom methods on the TypeConverter we can convert the type.

The tests above show that we expect to be able to convert a Point to a string where the string takes the format “(X,Y)”. So let’s look at an implementation for this

Note: this is an example of how we might implement this code and does not have full error handling, but hopefully it gives a basic idea of what you might implement.

public class PointTypeConverter : TypeConverter
{
   public override bool CanConvertTo(ITypeDescriptorContext context, 
            Type destinationType)
   {
      return (destinationType == typeof(string)) || 
         base.CanConvertTo(context, destinationType);
   }

   public override object ConvertTo(ITypeDescriptorContext context, 
            CultureInfo culture, 
            object value, 
            Type destinationType)
   {
      if (destinationType == typeof(string))
      {
         Point pt = (Point)value;
         return String.Format("({0},{1})", pt.X, pt.Y);
      }
      return base.ConvertTo(context, culture, value, destinationType);
   }

   public override bool CanConvertFrom(ITypeDescriptorContext context, 
            Type sourceType)
   {
      return (sourceType == typeof(string)) ||
         base.CanConvertFrom(context, sourceType);
   }

   public override object ConvertFrom(ITypeDescriptorContext context, 
            CultureInfo culture, 
            object value)
   {
      string s = value as string;
      if (s != null)
      {
         s = s.Trim();

         if(s.StartsWith("(") && s.EndsWith(")"))
         {
            s = s.Substring(1, s.Length - 2);

            string[] parts = s.Split(',');
            if (parts != null && parts.Length == 2)
            {
               Point pt = new Point();
               pt.X = Convert.ToInt32(parts[0]);
               pt.Y = Convert.ToInt32(parts[1]);
               return pt;
            }
         }
      }
      return base.ConvertFrom(context, culture, value);
   }
}

How to, conditionally, stop XML serializing properties

Let’s assume we have this simple C# class which represents some XML data (i.e. it’s serialized to XML eventually)

[XmlType(AnonymousType = true)]
public partial class Employee
{
   [XmlAttribute(AttributeName = "id")]
   public string Id { get; set; }

   [XmlAttribute(AttributeName = "name")]
   public string Name { get; set; }

   [XmlAttribute(AttributeName = "age")]
   public int Age { get; set; }
}

Under certain circumstances we may prefer not to include elements in the XML if the values are not suitable.

We could handle this in a simplistic manner by setting a DefaultValueAttribute on a property; the data will then not be serialized unless the value differs from the default. But this is not so useful if we want more complex logic to decide whether a value should be serialized, for example if we don’t want to serialize Age when it’s less than 1 or greater than 100, or not serialize Name if it’s null, empty or fewer than 3 characters long, and so on.
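For the simple case, a sketch of the DefaultValueAttribute approach (which needs using System.ComponentModel) might look like this; an Age equal to the default of 0 would then be omitted from the XML

[XmlAttribute(AttributeName = "age")]
[DefaultValue(0)]
public int Age { get; set; }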

ShouldSerializeXXX

Note: You should not use a ShouldSerializeXXX method and the DefaultValueAttribute on the same property

So, we can achieve this more complex logic using a ShouldSerializeXXX method. If we create a partial class (shown below) and add a ShouldSerializeName method, we can tell the serializer not to bother serializing the Name property under these more complex circumstances

public partial class Employee
{
   public bool ShouldSerializeName()
   {
      // only serialize Name when it's non-null, non-empty and at least 3 characters long
      return !String.IsNullOrEmpty(Name) && Name.Length >= 3;
   }
}

When serializing this data, these methods are called by the serializer to determine whether a property should be serialized; if it should not be, then the element/attribute is simply not added to the XML.
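To see the effect, here’s a quick sketch of serializing an Employee (using System.IO and System.Xml.Serialization); with a two character name, ShouldSerializeName returns false and the name attribute is omitted from the output

var employee = new Employee { Id = "1", Name = "Al", Age = 30 };

var serializer = new XmlSerializer(typeof(Employee));
using (var writer = new StringWriter())
{
   // the serializer calls ShouldSerializeName, which returns false for "Al"
   serializer.Serialize(writer, employee);
   Console.WriteLine(writer); // the output XML has no name attribute
}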

Entity Framework – lazy & eager loading

By default Entity Framework will lazy load any related entities. If you’ve not come across lazy loading before, it basically means coding something in such a way that the item is not retrieved and/or created until you actually want to use it. For example, the code below shows that the AlternateNames list is not instantiated until the property is called.

public class Plant
{
   private IList<AlternateName> alternateNames;

   public virtual IList<AlternateName> AlternateNames
   {
      get
      {
         // create the list on first access and cache it for subsequent calls
         return alternateNames ?? (alternateNames = new List<AlternateName>());
      }
   }
}

So as you can see from the example above we only create an instance of IList when the AlternateNames property is called.

As stated at the start of this post, Entity Framework defaults to lazy loading, which is perfect in most scenarios, but let’s take one where it’s not…

If you are returning an instance of an object (like Plant above), AlternateNames is not loaded until it’s referenced. However, if you were to pass the Plant object over the wire using something like WCF, AlternateNames will not have been instantiated, and when the caller/client tries to access the AlternateNames property it can no longer be loaded. What we need to do is ensure the object is fully loaded before passing it over the wire. To do this we need to eager load the data.

Eager Loading is the process of ensuring a lazy loaded object is fully loaded. In Entity Framework we achieve this using the Include method, thus

return context.Plants.Include("AlternateNames");
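Note: assuming Entity Framework 4.1 or later, there’s also a lambda-based Include extension method (brought in with using System.Data.Entity) which gives us compile-time checking of the property name

return context.Plants.Include(p => p.AlternateNames);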

Comparing Moq and JustMock lite

This is not meant as a “which is best” post, or even a blow-by-blow feature comparison, but more of a “I’m using JustMock lite (known henceforth as JML), how do I do this in Moq” or vice versa.

Please note: Moq version 4.20 introduced SponsorLink, which appears to send data to a third party. See the discussions on GitHub.

For this post I’m using Moq 4.2.1402.2112 and JML 2014.1.1424.1.

For the code samples, I’m writing xUnit tests, but I’m not necessarily going to write code to use the mocks; instead I’ll call directly on the mocks to demonstrate solely how they work. Such tests would obviously only really ensure that the mocking framework works as expected, but hopefully the idea of the mocks’ usage is conveyed in as little code as possible.

Strict behaviour

By default both Moq and JML use loose behavior, meaning simply that if we do not create any Setup or Arrange code for the methods/properties being mocked, then the mocking framework will default them. When using strict behavior we are basically saying that if a method or property is called on the mock object and we’ve not set up any behavior for it, then the mocking framework should fail – meaning we’ll get an exception from the mocking framework.

Following is an example of using the strict behavior – removing the Setup/Arrange will cause a mocking framework exception, while adding the Setup/Arrange will fulfill the strict behavior and allow the code to complete

Using Moq

Mock<IFeed> feed = new Mock<IFeed>(MockBehavior.Strict);

feed.Setup(f => f.GetTitle()).Returns("");

feed.Object.GetTitle();

Using JML

IFeed feed = Mock.Create<IFeed>(Behavior.Strict);

Mock.Arrange(() => feed.GetTitle()).Returns("");

feed.GetTitle();

Removing the MockBehavior.Strict/Behavior.Strict from the mock calls will switch to loose behaviors.

Ensuring a mocked method/property is called n times

Occasionally we want to ensure that a method/property is called a specified number of times only, for example, once, at least n times, at most n etc.

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();

feed.Setup(f => f.GetTitle()).Returns("");

feed.Object.GetTitle();

feed.Verify(f => f.GetTitle(), Times.Once);

Using JML

IFeed feed = Mock.Create<IFeed>();

Mock.Arrange(() => feed.GetTitle()).Returns("").OccursOnce();

feed.GetTitle();

Mock.Assert(feed);

In both examples we could change OccursOnce()/Times.Once to OccursNever()/Times.Never or Occurs(2)/Times.Exactly(2) and so on.

Throwing exceptions

On occasion we may want to mock an exception; maybe our IFeed throws a WebException if it cannot download data from a website and we want to simulate this on our mock object – then we can use the following

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();

feed.Setup(f => f.Download()).Throws<WebException>();

Assert.Throws<WebException>(() => feed.Object.Download());

feed.Verify();

Using JML

IFeed feed = Mock.Create<IFeed>();

Mock.Arrange(() => feed.Download()).Throws<WebException>();

Assert.Throws<WebException>(() => feed.Download());

Mock.Assert(feed);

Supporting multiple interfaces

Occasionally we might be mocking an interface, such as IFeed, but our application will check whether the IFeed object also supports IDataErrorInfo (for example) and handle the code accordingly. So, without actually changing IFeed, what we would expect is a concrete class which implements both interfaces.

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();
feed.As<IDataErrorInfo>();

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed.Object);

Using JML

IFeed feed = Mock.Create<IFeed>(r => r.Implements<IDataErrorInfo>());

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed);

As you can see, we add interfaces to our mock in Moq using the As method and in JML using the Implements method; we can chain these methods together to add further interfaces to our mock, as per

Using Moq

Mock<IFeed> feed = new Mock<IFeed>();
feed.As<IDataErrorInfo>().
     As<INotifyPropertyChanged>();

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed.Object);
Assert.IsAssignableFrom(typeof(INotifyPropertyChanged), feed.Object);

Using JML

IFeed feed = Mock.Create<IFeed>(r => 
   r.Implements<IDataErrorInfo>().
     Implements<INotifyPropertyChanged>());

Assert.IsAssignableFrom(typeof(IDataErrorInfo), feed);
Assert.IsAssignableFrom(typeof(INotifyPropertyChanged), feed);

Automocking

One of the biggest problems when unit testing with mocks is when a system under test (SUT) requires many parts to be mocked and set up, or when the code for the SUT changes often, requiring the tests to be refactored simply to add or change the mock objects used.

As you’ve already seen with loose behavior, we can get around the need to set up every single bit of code and thus concentrate our tests on specific areas without creating a thousand and one mocks and setup/arrange sections of code. But in a possibly ever-changing SUT it would be good if we didn’t need to continually add/remove mocks which we might not be testing against.

What would be nice is if the mocking framework could work like an IoC system and automatically inject the mocks for us – this is basically what auto mocking is about.

So if we look at the code below, assume for a moment that initially it didn’t include IProxySettings: we write our IFeedList mock and the code to test the RssReader, then we add the new interface IProxySettings and now we need to alter the tests to include it even though our current test code doesn’t need it. Of course, with the addition of a single interface this may seem a little over the top, however it can easily get a lot worse.

So here’s the code…

System under test and service code

public interface IFeedList
{
   string Download();
}

public interface IProxySettings
{		
}

public class RssReader
{
   private IFeedList feeds;
   private IProxySettings settings;

   public RssReader(IProxySettings settings, IFeedList feeds)
   {
      this.settings = settings;
      this.feeds = feeds;
   }

   public string Download()
   {
      return feeds.Download();
   }
}

Now when the auto mocking container creates the RssReader, it will automatically inject mocks for the two interfaces; then it’s up to our test code to set up or arrange expectations etc. on them.

Using Moq

Unlike the code you will see (further below) for JML, Moq doesn’t come with an auto mocking container by default (JML’s NuGet package will add the Telerik.JustMock.Container by default). Instead, Moq has several auto mocking containers created for use with it by the community at large. I’m going to concentrate on Moq.Contrib, which includes the AutoMockContainer class.

MockRepository repos = new MockRepository(MockBehavior.Loose);
AutoMockContainer container = new AutoMockContainer(repos);

RssReader rss = container.Create<RssReader>();

container.GetMock<IFeedList>().Setup(f => f.Download()).Returns("Data");

Assert.Equal("Data", rss.Download());

repos.VerifyAll();

Using JML

var container = new MockingContainer<RssReader>();

container.Arrange<IFeedList>(f => f.Download()).Returns("Data");

Assert.Equal("Data", container.Instance.Download());

container.AssertAll();

In both cases the auto mock container created our RssReader, mocking the interfaces passed to it.

That’s it for now, I’ll add further comparisons as and when I get time.