Category Archives: C#

Using protobuf-net in C#

protobuf-net is a .NET library for working with Google's Protocol Buffers.

For information on the actual Google Protocol Buffers you can check out the Google documentation.

To get the library via NuGet you can use Install-Package protobuf-net from the Package Manager Console or locate the same from the NuGet UI.

To quote Implementing Google Protocol Buffers using C#, “Protocol Buffers are not designed to handle large messages. If you are dealing in messages larger than a megabyte each, it may be time to consider an alternate strategy. Protocol Buffers are great for handling individual messages within a large data set. Usually, large data sets are really just a collection of small pieces, where each small piece may be a structured piece of data.”

So Protocol Buffers are best used on small sets of data, but let’s start coding and see how to use Protocol Buffers via protobuf-net.

Time to code

There are a couple of ways of making your classes compatible with the protobuf-net library: the first is to use attributes; the second works without attributes, instead setting up the metadata yourself.

Let’s look at an example from the protobuf-net website, a Person class which contains an Address class and other properties.

[ProtoContract]
public class Person 
{
   [ProtoMember(1)]
   public int Id { get; set; }
   [ProtoMember(2)]
   public string Name { get; set; }
   [ProtoMember(3)]
   public Address Address { get; set;}
}

[ProtoContract]
public class Address 
{
   [ProtoMember(1)]
   public string Line1 {get;set;}
   [ProtoMember(2)]
   public string Line2 {get;set;}
}

As can be seen, the classes to be serialized are marked with the ProtoContractAttribute. By default protobuf-net expects you to mark your objects with attributes but, as already mentioned, you can also use classes without attributes, as we’ll see shortly.

The ProtoMemberAttribute marks each property to be serialized and must be given a unique, positive integer. It is these identifiers that are serialized, as opposed to (for example) the property name itself; thus you can change the property name but not the ProtoMemberAttribute number. As this value is serialized, the smaller the number the better; a large number will take up unnecessary space.

Serialize/Deserialize

Once we’ve defined the objects we want to serialize, along with the identifiers the serializer needs, we can actually start serializing and deserializing some data. Assuming that we’ve created a Person object and assigned it to the variable person, we can serialize this instance of the Person object as follows

using (var fs = File.Create("test.bin"))
{
   Serializer.Serialize(fs, person);
}

and to deserialize we can do the following

using (var fs = File.OpenRead("test.bin"))
{
   Person person = Serializer.Deserialize<Person>(fs);
   // do something with person
}

Note: Protocol Buffers is a binary serialization protocol
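As it’s binary, the output isn’t human readable; serializing to a MemoryStream is a handy way to get at the raw bytes if you want to inspect them (a sketch, reusing the person instance from above):

```csharp
byte[] bytes;
using (var ms = new MemoryStream())
{
   Serializer.Serialize(ms, person);
   // the raw protobuf payload for this instance
   bytes = ms.ToArray();
}
```

The byte count here will typically be considerably smaller than an equivalent XML serialization of the same object.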

Now without attributes

As mentioned previously, we might not be able to alter the class definition, or we may simply prefer not to use attributes, in which case we need to set up the metadata programmatically. So let’s redefine Person and Address, just to be perfectly clear

public class Person 
{
   public int Id { get; set; }
   public string Name { get; set; }
   public Address Address { get; set;}
}
    
public class Address 
{
   public string Line1 {get;set;}
   public string Line2 {get;set;}
}

Prior to serialization/deserialization we would write something like

var personMetaType = RuntimeTypeModel.Default.Add(typeof (Person), false);
personMetaType.Add(1, "Id");
personMetaType.Add(2, "Name");
personMetaType.Add(3, "Address");

var addressMetaType = RuntimeTypeModel.Default.Add(typeof(Address), false);
addressMetaType.Add(1, "Line1");
addressMetaType.Add(2, "Line2");

As you can see, we supply the integer identifier followed by the property name.

RuntimeTypeModel.Default is used to setup the configuration details for our types and their properties.
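With the metadata registered, serialization itself is unchanged, since the Serializer class uses RuntimeTypeModel.Default under the covers; something like the following should work (assuming the configuration code above has already run):

```csharp
var person = new Person
{
   Id = 1,
   Name = "Test",
   Address = new Address { Line1 = "1 High Street" }
};

// no attributes needed; the runtime model supplies the field numbers
using (var fs = File.Create("test.bin"))
{
   Serializer.Serialize(fs, person);
}
```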

Inheritance

As with SOAP serialization and the like, when we derive a new class from a type we need to mark the base type with an attribute telling the serializer what derived types it might expect. So, for example, if we added the following derived types

[ProtoContract]
public class Male : Person
{		
}

[ProtoContract]
public class Female : Person
{	
}	

we’d need to update our Person class to look something like

[ProtoContract]
[ProtoInclude(10, typeof(Male))]
[ProtoInclude(11, typeof(Female))]
public class Person 
{
   // properties
}

Note: the identifiers 10 and 11 are again unique positive integers, but must be unique within the class; for example, no other ProtoIncludeAttribute or ProtoMemberAttribute within the class should have the same identifier.

Without attributes we simply call AddSubType on the personMetaType defined previously; for example, we would add the following code to our earlier metadata setup

// previous metadata configuration
personMetaType.AddSubType(10, typeof (Male));
personMetaType.AddSubType(11, typeof(Female));

// and now add the new types
RuntimeTypeModel.Default.Add(typeof(Male), false);
RuntimeTypeModel.Default.Add(typeof(Female), false);
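With the subtypes registered (whether via ProtoIncludeAttribute or AddSubType), serializing through the base class should round-trip the derived type; a quick sketch:

```csharp
Person person = new Male { Id = 1, Name = "Test" };

using (var ms = new MemoryStream())
{
   Serializer.Serialize(ms, person);
   ms.Position = 0;

   // protobuf-net writes the subtype identifier (10) into the stream,
   // so deserializing as Person recreates a Male instance
   Person roundTripped = Serializer.Deserialize<Person>(ms);
}
```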

Alternative Implementations of Protocol Buffers for .NET

protobuf-csharp-port is written by Jon Skeet and appears to be more closely related to the Google code, using .proto files to describe the messages.

Entity Framework & AutoMapper with navigational properties

I’ve got a webservice which uses EF to query SQL Server for data. The POCOs for the three tables we’re interested in are listed below:

public class Plant
{
   public int Id { get; set; }

   public PlantType PlantType { get; set; }
   public LifeCycle LifeCycle { get; set; }

  // other properties
}

public class PlantType
{
   public int Id { get; set; }
   public string Name { get; set; }
}

public class LifeCycle
{
   public int Id { get; set; }
   public string Name { get; set; }
}

The issue is that when a new plant is added (or updated, for that matter) using the AddPlant (or UpdatePlant) method, we need to ensure EF references the existing LifeCycle and PlantType within its context, i.e. if we simply call something like

context.Plants.Add(newPlant);

then (even though the LifeCycle and PlantType have existing Ids in the database) EF appears to create new PlantTypes and LifeCycles, giving us multiple instances of the same LifeCycle or PlantType name. For the update method I’ve been using AutoMapper to map all the properties, which works well except for the navigational properties, where the same EF problem occurs.

I tried several ways to solve this but kept hitting snags. For example, we need to get the instances of the PlantType and LifeCycle from the EF context and assign these to the navigational properties to stop EF adding new PlantTypes etc., and I wanted to achieve this in a clean way with AutoMapper. By default the way to create mappings in AutoMapper is via the static Mapper class, which suggests the mappings should not change based upon the current data; what we really need is to create mappings for a specific webservice method call.

To create an instance of the mapper and use it we can do the following (error checking etc. removed)

using (PlantsContext context = new PlantsContext())
{
   var configuration = new ConfigurationStore(
                     new TypeMapFactory(), MapperRegistry.AllMappers());
   var mapper = new MappingEngine(configuration);
   configuration.CreateMap<Plant, Plant>()
         .ForMember(p => p.PlantType, 
            c => c.MapFrom(pl => context.PlantTypes.FirstOrDefault(pt => pt.Id == pl.PlantType.Id)))
         .ForMember(p => p.LifeCycle, 
            c => c.MapFrom(pl => context.LifeCycles.FirstOrDefault(lc => lc.Id == pl.LifeCycle.Id)));

   //... use the mapper.Map to map our data and then context.SaveChanges() 
}

So it can be seen that we can now interact with the instance of the context to find the PlantType and LifeCycle to map, and we do not end up trying to create mappings on the static class.
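For completeness, the elided mapping step might look something like this (a sketch; updatedPlant is a hypothetical Plant received by the webservice call, and context.Plants is assumed to expose Find):

```csharp
// load the tracked entity and map the incoming values onto it;
// the CreateMap configuration above resolves the navigational
// properties against the context rather than creating new rows
Plant existingPlant = context.Plants.Find(updatedPlant.Id);
mapper.Map(updatedPlant, existingPlant);
context.SaveChanges();
```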

The Vending Machine Change problem

I was reading about the “Vending Machine Change” problem the other day. This is a well known problem, which I’m afraid to admit I had never heard of, but I thought it was interesting enough to take a look at now.

Basically the problem we’re trying to solve is this: write the software to calculate the minimum number of coins required to return an amount of change to the user. In other words, if a vending machine had the coins 1, 2, 5 & 10, what is the minimum number of coins required to make up change of 43 pence (or whatever unit of currency you want to use)?

The coin denominations should be supplied, so the algorithm is not specific to the UK or any other country, and the amount of change should also be supplied to the algorithm.

First Look

This is a standard solution to the Vending Machine problem (please note: this code appears all over the internet in various languages; I’ve just made a couple of unimpressive changes to it)

static int Calculate(int[] coins, int change)
{
   int[] counts = new int[change + 1];
   counts[0] = 0;

   for(int i = 1; i <= change; i++)
   {
      int count = int.MaxValue;
      foreach(int coin in coins)
      {
         int total = i - coin;
         if(total >= 0 && count > counts[total])
         {
            count = counts[total];
         }
      }
      counts[i] = (count < int.MaxValue) ? count + 1 : int.MaxValue;
   }
   return counts[change];
}

What happens in this code is that we create an array counts which will contain the minimum number of coins for each value between 1 and the amount of change required. We use the 0 index as a counter start value (hence we set counts[0] to 0).

Next we loop through each possible change value, calculating the number of coins required to make up each value; we use int.MaxValue to indicate that no coins could be found to match the amount of change.

This algorithm assumes an infinite number of coins of each denomination, which is obviously an unrealistic scenario, so let’s have a look at my attempt to solve the problem for a finite number of each coin.
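As a quick sanity check, the method above can be exercised like this (the coin array and change amount are just example values):

```csharp
int[] coins = { 1, 2, 5, 10 };
int minimum = Calculate(coins, 43);
// 43 = 10 + 10 + 10 + 10 + 2 + 1, so we'd expect 6
```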

Let’s try to make things a little more complex

So, as mentioned above, I want to now try to calculate the minimum number of coins to produce the amount of change required, where the number of coins of each denomination is finite.

Let’s start by defining a Coin class

public class Coin
{
   public Coin(int denomination, int count)
   {
      Denomination = denomination;
      Count = count;
   }
   }

   public int Denomination { get; set; }
   public int Count { get; set; }
}

Before I introduce my attempt at a solution, let’s write some tests. I’ve not got great names for many of the tests; they’re mainly for me to try different test scenarios out, but I’m sure you get the idea.

public class VendingMachineTests
{
   private bool Expects(IList<Coin> coins, int denomination, int count)
   {
      Coin c = coins.FirstOrDefault(x => x.Denomination == denomination);
      return c == null ? false : c.Count == count;
   }

   [Fact]
   public void Test1()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 100),
         new Coin(5, 100),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 15);
      Assert.Equal(2, results.Count);
      Assert.True(Expects(results, 10, 1));
      Assert.True(Expects(results, 5, 1));
   }

   [Fact]
   public void Test2()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 100),
         new Coin(5, 100),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 1);
      Assert.Equal(1, results.Count);
      Assert.True(Expects(results, 1, 1));
   }

   [Fact]
   public void Test3()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 1),
         new Coin(5, 1),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 20);
      Assert.Equal(4, results.Count);
      Assert.True(Expects(results, 10, 1));
      Assert.True(Expects(results, 5, 1));
      Assert.True(Expects(results, 2, 2));
      Assert.True(Expects(results, 1, 1));
   }

   [Fact]
   public void NoMatchDueToNoCoins()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 0),
         new Coin(5, 0),
         new Coin(2, 0),
         new Coin(1, 0),
      };

      Assert.Null(VendingMachine.Calculate(coins, 20));
   }

   [Fact]
   public void NoMatchDueToNotEnoughCoins()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 5),
         new Coin(5, 0),
         new Coin(2, 0),
         new Coin(1, 0),
      };

      Assert.Null(VendingMachine.Calculate(coins, 100));
   }

   [Fact]
   public void Test4()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 1),
         new Coin(5, 1),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 3);
      Assert.Equal(2, results.Count);
      Assert.True(Expects(results, 2, 1));
      Assert.True(Expects(results, 1, 1));
   }

   [Fact]
   public void Test5()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 0),
         new Coin(5, 0),
         new Coin(2, 0),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 34);
      Assert.Equal(1, results.Count);
      Assert.True(Expects(results, 1, 34));
   }

   [Fact]
   public void Test6()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(50, 2),
         new Coin(20, 1),
         new Coin(10, 4),
         new Coin(1, int.MaxValue),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 98);
      Assert.Equal(4, results.Count);
      Assert.True(Expects(results, 50, 1));
      Assert.True(Expects(results, 20, 1));
      Assert.True(Expects(results, 10, 2));
      Assert.True(Expects(results, 1, 8));
   }

   [Fact]
   public void Test7()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(50, 1),
         new Coin(20, 2),
         new Coin(15, 1),
         new Coin(10, 1),
         new Coin(1, 8),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 98);
      Assert.Equal(3, results.Count);
      Assert.True(Expects(results, 50, 1));
      Assert.True(Expects(results, 20, 2));
      Assert.True(Expects(results, 1, 8));
   }
}

Now, here’s the code for my attempt at solving this problem. It uses a “greedy” algorithm, i.e. trying to find the largest coin(s) first. The code therefore requires that the coins are sorted from largest to smallest; I have not put the sort within the Calculate method because it’s called recursively, so it’s down to the calling code to handle this.

There may well be a better way to implement this algorithm; obviously one might prefer to remove the recursion (I may revisit the code when I have time to make that change).

public static class VendingMachine
{
   public static IList<Coin> Calculate(IList<Coin> coins, int change, int start = 0)
   {
      for (int i = start; i < coins.Count; i++)
      {
         Coin coin = coins[i];
         // no point calculating anything if no coins exist or the 
         // current denomination is too high
         if (coin.Count > 0 && coin.Denomination <= change)
         {
            int remainder = change % coin.Denomination;
            if (remainder < change)
            {
               int howMany = Math.Min(coin.Count, 
                   (change - remainder) / coin.Denomination);

               List<Coin> matches = new List<Coin>();
               matches.Add(new Coin(coin.Denomination, howMany));

               int amount = howMany * coin.Denomination;
               int changeLeft = change - amount;
               if (changeLeft == 0)
               {
                   return matches;
               }

               IList<Coin> subCalc = Calculate(coins, changeLeft, i + 1);
               if (subCalc != null)
               {
                  matches.AddRange(subCalc);
                  return matches;
               }
            }
         }
      }
      return null;
   }
}

Issues with this solution

Whilst this solution does the job pretty well, it’s not perfect. If we had the coins 50, 20, 11, 10 and 1, the optimal minimum number of coins to make change of 33 would be 3 * 11 coins, but with the algorithm listed above the result would be 1 * 20, 1 * 11 and then 2 * 1 coins.

Of course, the above example assumes we have 3 of the 11 unit coins in the vending machine.

To solve this we could call the same algorithm repeatedly, removing the largest coin type on each call. Let’s look at this by first adding a unit test

[Fact]
public void Test8()
{
   List<Coin> coins = new List<Coin>
   {
      new Coin(50, 1),
      new Coin(20, 2),
      new Coin(11, 3),
      new Coin(10, 1),
      new Coin(1, 8),
   };

   IList<Coin> results = VendingMachine.CalculateMinimum(coins, 33);
   Assert.Equal(1, results.Count);
   Assert.True(Expects(results, 11, 3));
}

To transition from the previous solution to the new, improved one, we’ve simply added a new method named CalculateMinimum. The purpose of this method is to try to find the best solution by calculating with all the coins, then reducing the set of available coins by removing the largest coin and finding the best solution again, then removing the next largest coin, and so on. Here’s some code which might better demonstrate this

public static IList<Coin> CalculateMinimum(IList<Coin> coins, int change)
{
   // used to store the minimum matches
   IList<Coin> minimalMatch = null;
   int minimalCount = -1;

   IList<Coin> subset = coins;
   for (int i = 0; i < coins.Count; i++)
   {
      IList<Coin> matches = Calculate(subset, change);
      if (matches != null)
      {
         int matchCount = matches.Sum(c => c.Count);
         if (minimalMatch == null || matchCount < minimalCount)
         {
            minimalMatch = matches;
            minimalCount = matchCount;
         }
      }
      // reduce the list of possible coins
      subset = subset.Skip(1).ToList();
   }

   return minimalMatch;
}

Performance wise, this (in conjunction with the Calculate method) is sub-optimal if we need to calculate such minimum numbers of coins many times in a short period of time. A lack of state means we may end up calculating the same change multiple times. Of course, if performance were a concern, we might save previous calculations and first check whether the algorithm already has a valid solution each time we calculate change, and/or we might calculate a “standard” set of results at start up. For example, if we’re selling cans of drinks for 80 pence, we could pre-calculate change based upon likely inputs to the vending machine, i.e. 90p, £1 or £2 coins.

Oh, and we might prefer to remove the use of the LINQ code, such as Skip and ToList, to better utilise memory etc.
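If caching previous results did become worthwhile, a simple (hypothetical) memoising wrapper along these lines might be a starting point, assuming the available coins don’t change between calls:

```csharp
private static readonly Dictionary<int, IList<Coin>> previousResults =
   new Dictionary<int, IList<Coin>>();

public static IList<Coin> CalculateMinimumCached(IList<Coin> coins, int change)
{
   IList<Coin> result;
   // reuse a previously calculated result where one exists
   if (!previousResults.TryGetValue(change, out result))
   {
      result = CalculateMinimum(coins, change);
      previousResults[change] = result;
   }
   return result;
}
```

In a real vending machine the cache would need invalidating whenever coins are added to or removed from the machine, since the stored results depend on the coin counts at the time they were calculated.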

References

http://onestopinterviewprep.blogspot.co.uk/2014/03/vending-machine-problem-dynamic.html
http://codercareer.blogspot.co.uk/2011/12/no-26-minimal-number-of-coins-for.html
http://techieme.in/techieme/minimum-number-of-coins/
http://www.careercup.com/question?id=15139685

Debugging a release build of a .NET application

What’s a Release Build compared to a Debug build

Release builds of a .NET application (by default) add optimizations and remove any debug code from the build, i.e. anything inside #if DEBUG is removed, as are Debug.* calls (Trace.* calls are only stripped if the TRACE symbol is undefined, and by default Release builds still define it). You also have reduced debug information. However, you will still have .PDB files…

PDB files are generated by the compiler if a project’s properties allow for .PDB files to be generated. Simply check the project properties, select the Build tab and then the Advanced… button. You’ll see Debug Info, which can be set to full, pdb-only or none. Obviously none will not produce any .PDB files.

At this point I do not know the differences between pdb-only and full; if I find out I’ll amend this post, but out of the box Release builds use pdb-only whilst Debug builds use full.

So what are .PDB files ?

Simply put – PDB files contain symbol information which allows us to map debug information to source files when we attach a debugger and step through the code.

Debugging a Release Build

It’s often the case that we’ll create a deployment of a Release build without the PDB files; this may be due to a desire to reduce the deployment footprint or some other reason. If we cannot or do not wish to deploy the PDBs with an application, then we should store them for each specific release version.

Before attaching our debugger (Visual Studio) we need to add the PDB file locations to Visual Studio. Select the Debug menu, then Options and Settings. From here select Debugging | Symbols in the tree view on the left of the Options dialog. Click the add folder button and type in (or paste) the folder name for the symbols for the specific Release build.

Now attach Visual Studio using Debug | Attach to Process; the symbols will be loaded for the build and you can step through the source code.

Let’s look at a real example

An application I work on deploys over the network and we do not include PDB files with it, so as to reduce the size of the deployment. If we find a bug only repeatable in “production”, we cannot step through the source code related to the build without both the version of the code related to that release and the PDB files for that release.

What we do is, when our continuous integration server runs, it builds a specific version of the application as a release build. We embed the source repository revision into the EXE version number. This allows us to easily check out the source related to that build if need be.

During the build process, we then copy the release build to a deployment folder, again using the source code revision in the folder name. We (as already mentioned) remove the PDB files (Tests and other such files are also removed). However, we don’t just throw away the PDBs; we instead copy them to a folder named similarly to the release build but with the word Symbols in the folder name (and of course with the same version number). The PDBs are all copied to this folder and are now accessible if we need to debug a release build.

Now if the Release (or production) build is executed and an error occurs, or we just need to step through code for some other reason, we can get the specific source for the deployed version, point Visual Studio at the PDB files for that build and step through our code.

So don’t just delete your PDBs; store them in case you need them in the future.

Okay, how do we use the symbol/PDB files

So, in Visual Studio (if you’re using that to debug/step through your code), open your project with the correct source for your release build.

In the Tools | Options dialog, select the Debugging parent node and then select Symbols, or of course just type Symbols into the search text box in Visual Studio 2013.

Now press the folder button and type in the location of your PDB files folder. Note that this option doesn’t have a folder browse option so you’ll need to type (or copy and paste) the folder name yourself.

Ensure the folder is checked so that Visual Studio will load the symbols.

Now attach the debugger to your release build and Visual Studio will (as mentioned) locate the correct symbols and attach them and then allow you to step through your source.

See Specify Symbol (.pdb) and Source Files in the Visual Studio Debugger for more information and some screen shots of the process just described.

Downloading a file from URL using basic authentication

I had some code in an application I work on which uses Excel to open a .csv file from a URL. The problem is that users have moved to Excel 2010 (yes, we’re a little behind the latest versions) and basic authentication is no longer supported without registry changes (see Office file types fail to open from server).

So, to re-implement this I needed to write some code to handle the file download myself (as we’re not able to change users’ registry settings).

The code is simple enough, but I thought it’d be useful to document it here anyway

WebClient client = new WebClient();
client.Proxy = WebRequest.DefaultWebProxy;
client.Credentials = new NetworkCredential(userName, password);
client.DownloadFile(url, filename);

This code assumes that the url is supplied to this code along with a filename for where to save the downloaded file.

We use a proxy, hence the proxy is supplied, and then we supply the NetworkCredential which will handle basic authentication. Here we need to supply the userName and password; of course, with basic authentication these will be passed as plain text over the wire.
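As an aside, NetworkCredential normally waits for the server’s 401 challenge before sending credentials. If we wanted to send the basic authentication header pre-emptively and skip that round trip, we could set the header ourselves, something like the following sketch (using the same userName, password, url and filename variables):

```csharp
WebClient client = new WebClient();
client.Proxy = WebRequest.DefaultWebProxy;

// build the "Basic" authorization header ourselves:
// base64 of "user:password"
string token = Convert.ToBase64String(
   Encoding.ASCII.GetBytes(userName + ":" + password));
client.Headers[HttpRequestHeader.Authorization] = "Basic " + token;

client.DownloadFile(url, filename);
```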

Type conversions in C#

Converting one type to another

All of the primitive types, such as Int32, Boolean, String etc., implement the IConvertible interface. This means we can easily change one type to another using

float f = (float)Convert.ChangeType("100", typeof(float));

The thing to note regarding IConvertible is that it’s one way, i.e. from the type which implements IConvertible to another type, but not back (this is where the TypeConverter class, which we’ll discuss next, comes into play).

So let’s look at a simple example which converts a Point to a string; and yes, before I show the code for implementing IConvertible, we could have simply overridden the ToString method (which I shall also show in the sample code).

First off let’s create a couple of tests to prove our code works. The first takes a Point and using IConvertible, will generate a string representation of the type. As it uses ToString there’s no surprise that the second test which uses the ToString method will produce the same output.

[Fact]
public void ChangeTypePointToString()
{
   Point p = new Point { X = 100, Y = 200 };
   string s = (string)Convert.ChangeType(p, typeof(string));

   Assert.Equal("(100,200)", s);
}

[Fact]
public void PointToString()
{
   Point p = new Point { X = 100, Y = 200 };

   Assert.Equal("(100,200)", p.ToString());
}

Now let’s look at our Point type, with an overridden ToString method

public struct Point : IConvertible
{
   public int X { get; set; }
   public int Y { get; set; }

   public override string ToString()
   {
      return String.Format("({0},{1})", X, Y);
   }

   // ... IConvertible methods
}

and now let’s look at a possible implementation of the IConvertible

TypeCode IConvertible.GetTypeCode()
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

bool IConvertible.ToBoolean(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

byte IConvertible.ToByte(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

char IConvertible.ToChar(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

DateTime IConvertible.ToDateTime(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

decimal IConvertible.ToDecimal(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

double IConvertible.ToDouble(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

short IConvertible.ToInt16(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

int IConvertible.ToInt32(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

long IConvertible.ToInt64(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

sbyte IConvertible.ToSByte(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

float IConvertible.ToSingle(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

string IConvertible.ToString(IFormatProvider provider)
{
   return ToString();
}

object IConvertible.ToType(Type conversionType, IFormatProvider provider)
{
   if(conversionType == typeof(string))
      return ToString();

   throw new InvalidCastException("The method or operation is not implemented.");
}

ushort IConvertible.ToUInt16(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

uint IConvertible.ToUInt32(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

ulong IConvertible.ToUInt64(IFormatProvider provider)
{
   throw new InvalidCastException("The method or operation is not implemented.");
}

TypeConverters

As mentioned previously, IConvertible allows us to convert a type to one of the primitive types, but what if we want more complex capabilities, converting to and from various types? This is where the TypeConverter class comes in.

Here we develop our type as normal and then adorn it with the TypeConverterAttribute at the struct/class level. The attribute takes a type derived from the TypeConverter class; this derived class does the actual type conversion to and from our adorned type.

Let’s again create a Point struct to demonstrate this on

[TypeConverter(typeof(PointTypeConverter))]
public struct Point
{
   public int X { get; set; }
   public int Y { get; set; }
}

Note: We can also declare the TypeConverter type using a string in the standard “Type, Assembly” format, i.e. [TypeConverter(“MyTypeConverters.PointTypeConverter, MyTypeConverters”)], if we wanted to reference a type in an external assembly.

Before we create the TypeConverter code, let’s take a look at some tests which hopefully demonstrate how we use the TypeConverter and what we expect from our conversion code.

[Fact]
public void CanConvertPointToString()
{
   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Assert.True(tc.CanConvertTo(typeof(string)));
}

[Fact]
public void ConvertPointToString()
{
   Point p = new Point { X = 100, Y = 200 };

   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Assert.Equal("(100,200)", tc.ConvertTo(p, typeof(string)));
}

[Fact]
public void CanConvertStringToPoint()
{
   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Assert.True(tc.CanConvertFrom(typeof(string)));
}

[Fact]
public void ConvertStringToPoint()
{
   TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

   Point p = (Point)tc.ConvertFrom("(100,200)");
   Assert.Equal(100, p.X);
   Assert.Equal(200, p.Y);
}

So as you can see, to get the TypeConverter for our class we call the static GetConverter method on the TypeDescriptor class. This returns an instance of our TypeConverter (in this case our PointTypeConverter). From this we can check whether the type converter can convert to or from a type and then, using the ConvertTo or ConvertFrom methods on the TypeConverter, convert the value.

The tests above show that we expect to be able to convert a Point to a string where the string takes the format “(X,Y)”. So let’s look at an implementation for this

Note: this is an example of how we might implement this code; it does not have full error handling, but hopefully gives a basic idea of what you might implement.

public class PointTypeConverter : TypeConverter
{
   public override bool CanConvertTo(ITypeDescriptorContext context, 
            Type destinationType)
   {
      return (destinationType == typeof(string)) || 
         base.CanConvertTo(context, destinationType);
   }

   public override object ConvertTo(ITypeDescriptorContext context, 
            CultureInfo culture, 
            object value, 
            Type destinationType)
   {
      if (destinationType == typeof(string))
      {
         Point pt = (Point)value;
         return String.Format("({0},{1})", pt.X, pt.Y);
       }
       return base.ConvertTo(context, culture, value, destinationType);
   }

   public override bool CanConvertFrom(ITypeDescriptorContext context, 
            Type sourceType)
   {
      return (sourceType == typeof(string)) ||
         base.CanConvertFrom(context, sourceType);
   }

   public override object ConvertFrom(ITypeDescriptorContext context, 
            CultureInfo culture, 
            object value)
   {
      string s = value as string;
      if (s != null)
      {
         s = s.Trim();

         if(s.StartsWith("(") && s.EndsWith(")"))
         {
            s = s.Substring(1, s.Length - 2);

            string[] parts = s.Split(',');
            if (parts != null && parts.Length == 2)
            {
               Point pt = new Point();
               pt.X = Convert.ToInt32(parts[0]);
               pt.Y = Convert.ToInt32(parts[1]);
               return pt;
            }
         }
      }
      return base.ConvertFrom(context, culture, value);
   }
}

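Incidentally, the TypeConverter base class also gives us ConvertToString and ConvertFromString convenience methods, which wrap ConvertTo/ConvertFrom for the common string case. A quick sketch (assuming the Point class is decorated with the TypeConverter attribute as shown earlier):

```csharp
using System;
using System.ComponentModel;

// assumes the Point class above, decorated with
// [TypeConverter(typeof(PointTypeConverter))]
TypeConverter tc = TypeDescriptor.GetConverter(typeof(Point));

// ConvertToString calls our ConvertTo override with typeof(string)
string s = tc.ConvertToString(new Point { X = 1, Y = 2 });

// ConvertFromString calls our ConvertFrom override
Point p = (Point)tc.ConvertFromString("(3,4)");

Console.WriteLine(s);               // (1,2)
Console.WriteLine(p.X + "," + p.Y); // 3,4
```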
How to, conditionally, stop XML serializing properties

Let’s assume we have this simple C# class which represents some XML data (i.e. it’s serialized to XML eventually)

[XmlType(AnonymousType = true)]
public partial class Employee
{
   [XmlAttribute(AttributeName = "id")]
   public string Id { get; set; }

   [XmlAttribute(AttributeName = "name")]
   public string Name { get; set; }

   [XmlAttribute(AttributeName = "age")]
   public int Age { get; set; }
}

Under certain circumstances we may prefer not to include elements/attributes in the XML if the values are not suitable.

We could handle this in a simplistic manner by setting a DefaultValueAttribute on a property, and the data will then not be serialized unless the value differs from the default. However, this is not so useful if you want more complex logic to decide whether a value should be serialized, for example if we don’t want to serialize Age when it’s less than 1 or greater than 100, or not serialize Name when it’s empty, null or fewer than 3 characters long.

ShouldSerializeXXX

Note: You should not use a ShouldSerializeXXX method and the DefaultValueAttribute on the same property

So, we can now achieve this more complex logic using the ShouldSerializeXXX method. If we create a partial class (shown below) and add a ShouldSerializeName method, we can tell the serializer not to bother serializing the Name property under these more complex circumstances

public partial class Employee
{
   public bool ShouldSerializeName()
   {
      return !String.IsNullOrEmpty(Name) && Name.Length >= 3;
   }
}

When serializing this data the methods are called by the serializer to determine whether a property should be serialized and obviously if it should not be, then the element/attribute will not get added to the XML.
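To see this in action, here’s a sketch (the names and values are purely illustrative) which serializes two Employee instances; the XmlSerializer calls ShouldSerializeName for each one and only writes the name attribute when it returns true.

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// assumes the Employee partial classes shown above are compiled together
var serializer = new XmlSerializer(typeof(Employee));

string Serialize(Employee e)
{
   using (var writer = new StringWriter())
   {
      serializer.Serialize(writer, e);
      return writer.ToString();
   }
}

var alice = new Employee { Id = "1", Name = "Alice", Age = 30 };
var al = new Employee { Id = "2", Name = "Al", Age = 40 };

// the name attribute only appears when ShouldSerializeName returns true
Console.WriteLine(Serialize(alice).Contains("name="));
Console.WriteLine(Serialize(al).Contains("name="));
```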

Entity Framework – lazy & eager loading

By default Entity Framework will lazy load any related entities. If you’ve not come across Lazy Loading before it’s basically coding something in such a way that either the item is not retrieved and/or not created until you actually want to use it. For example, the code below shows the AlternateNames list is not instantiated until you call the property.

public class Plant
{
   private IList<AlternateName> alternateNames;

   public virtual IList<AlternateName> AlternateNames
   {
      get
      {
         return alternateNames ?? (alternateNames = new List<AlternateName>());
      }
   }
}

So, as you can see from the example above, we don’t create an instance of the list until the AlternateNames property is first called.

As stated at the start of this post, by default Entity Framework defaults to lazy loading which is perfect in most scenarios, but let’s take one where it’s not…

If you are returning an instance of an object (like Plant above), AlternateNames is not loaded until it’s referenced. However, if you were to pass the Plant object over the wire using something like WCF, AlternateNames will not have been instantiated at the point of serialization. The caller/client will try to access the AlternateNames property and of course it can no longer be loaded, as the context is not available on the client. What we need to do is ensure the object is fully loaded before passing it over the wire. To do this we need to Eager Load the data.

Eager Loading is the process of ensuring a lazy loaded object is fully loaded. In Entity Framework we achieve this using the Include method, thus

return context.Plants.Include("AlternateNames");
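As an aside, if you’re using the DbContext API (Entity Framework 4.1 or later) there’s also a lambda-based Include extension method in the System.Data.Entity namespace, which gives compile-time checking of the property name instead of a magic string. A sketch (PlantContext here is an assumed DbContext exposing a Plants set):

```csharp
using System.Data.Entity;
using System.Linq;

public class PlantService
{
   // PlantContext is an assumed DbContext with a DbSet<Plant> Plants property
   private readonly PlantContext context;

   public PlantService(PlantContext context)
   {
      this.context = context;
   }

   public IQueryable<Plant> GetPlantsWithAlternateNames()
   {
      // AlternateNames is loaded as part of the same query, so the object
      // graph is complete before it crosses the wire
      return context.Plants.Include(p => p.AlternateNames);
   }
}
```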

Getting started with Linq Expressions

The Expression class is used to represent expression trees and is seen in use within LINQ. If you’ve been creating your own LINQ provider you’ll also have come across Expressions. For example see my post Creating a custom Linq Provider on this subject.

Getting started with the Expression class

Expression objects can be used in various situations…

Let’s start by looking at using Expressions to represent lambda expressions.

Expression<Func<bool>> e = () => a < b;

In the above we declare an Expression which takes a Func which takes no arguments and returns a Boolean. On the right hand side of the assignment operator we can see an equivalent lambda expression, i.e. one which takes no arguments and returns a Boolean.

From this Expression we can then get at the function within the Expression by calling the Compile method thus

Func<bool> f = e.Compile();

We could also build an equivalent expression by hand using the Expression factory methods. For example

ConstantExpression lParam = Expression.Constant(a, typeof(int));
ConstantExpression rParam = Expression.Constant(b, typeof(int));
BinaryExpression lessThan = Expression.LessThan(lParam, rParam);
Expression<Func<bool>> e = Expression.Lambda<Func<bool>>(lessThan);

This probably doesn’t seem very exciting in itself, but if we can create an Expression from a lambda then we can also deconstruct a lambda into an Expression tree. So, in the previous lambda example, we could look at the left and right sides of the a < b expression and find their types, we could evaluate the parts, or we could simply traverse the expression and generate a database query from it, but that’s a subject beyond this post.
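For example, here’s a small sketch which pulls the a < b lambda apart and inspects its pieces:

```csharp
using System;
using System.Linq.Expressions;

int a = 1, b = 2;
Expression<Func<bool>> e = () => a < b;

// the Body of the lambda is the a < b comparison
var comparison = (BinaryExpression)e.Body;

Console.WriteLine(comparison.NodeType);   // LessThan
Console.WriteLine(comparison.Left.Type);  // System.Int32
Console.WriteLine(comparison.Right.Type); // System.Int32

// and we can still compile and evaluate the expression
Console.WriteLine(e.Compile()());         // True
```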

An alternate use

An interesting use of Expressions can be found in many MVVM base classes (or the likes). I therefore take absolutely no credit for the idea.

The scenario is this. We want to create a base class for handling the INotifyPropertyChanged interface, it will look like this

public class PropertyChangedObject : INotifyPropertyChanged
{
   public event PropertyChangedEventHandler PropertyChanged;

   public void OnPropertyChanged(string propertyName)
   {
      if (PropertyChanged != null)
      {
         PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
      }
   }
}

Next let’s write a simple class to use this, such as

public class MyObject : PropertyChangedObject
{
   private string name;

   public string Name
   {
      get { return name; }
      set
      {
         if (name != value)
         {
            name = value;
            OnPropertyChanged("Name");
         }
      }
   }
}

As you can see, within the setter, we need to check whether the value stored in the Name property is different to the new value passed to it and if so, update the backing field and then raise a property changed event passing a string to represent the property name.

An obvious problem with this approach is that “magic strings” can sometimes be incorrect (i.e. spelling mistakes or the likes). So it would be nicer if we could somehow pass the property name in a more typesafe and compile time checking way. It would also be nice to wrap the whole if block in an extension method which we can reuse in all the setters on our object.

Note: before we go much further with this, in .NET 4.5 there’s a better way to implement this code. See my post on the CallerMemberNameAttribute attribute.

So, one way we could pass the property name, which at least ensures the property exists at compile time, is to use an Expression object which will then include all the information we need (and more).

Here’s what we want the setter code to look like

public string Name
{
   get { return name; }
   set { this.RaiseIfPropertyChanged(p => p.Name, ref name, value); }
}

The second and third arguments are self-explanatory but, for the sake of completeness, let’s review them – the second argument takes a reference to the backing field. This will be set to the value contained within the third argument only if the two differ, at which point we expect an OnPropertyChanged call to be made and the PropertyChanged event to be raised.

The first argument is the bit relevant to the topic of this post, i.e. the Expression class.

Let’s look at the extension method that implements this and then walk through it

public static void RaiseIfPropertyChanged<TModel, TValue>(this TModel po, 
         Expression<Func<TModel, TValue>> e,  
         ref TValue backingField, 
         TValue value) where 
            TModel : PropertyChangedObject
{
   if (!EqualityComparer<TValue>.Default.Equals(backingField, value))
   {
      var m = e.Body as MemberExpression;
      if(m != null)
      {
         backingField = value;
         po.OnPropertyChanged(m.Member.Name);
      }
   }
}

The method can be used on any type which inherits from PropertyChangedObject; this constraint exists so that we have access to the OnPropertyChanged method.

We check the equality of the backingField and value and obviously, only if they’re different do we bother doing anything. Assuming the values are different we then get the Body of the expression as a MemberExpression, on this the Member.Name property will be a string representing the name of the property supplied in the calling property, i.e. in this example the property name “Name”.

So now, when we use the RaiseIfPropertyChanged extension method, we have a little more type safety, i.e. the property passed to the expression must be the same type as the backing field and value, and of course a misspelled/non-existent property will fail to compile as well, which lessens the chances of “magic string” typos. Obviously, if we passed another property of the same type into the Expression then this would compile and seemingly work, but the OnPropertyChanged event would be passed an incorrect property string – this is where the CallerMemberNameAttribute would help us further.
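To finish, a quick usage sketch (assuming the PropertyChangedObject base class, the extension method, and a MyObject whose setter uses RaiseIfPropertyChanged, as shown earlier, are all compiled together):

```csharp
using System;

var o = new MyObject();
o.PropertyChanged += (s, args) => Console.WriteLine("Changed: " + args.PropertyName);

o.Name = "Scooby";  // raises PropertyChanged with "Name"
o.Name = "Scooby";  // same value, so nothing is raised
```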

Generating classes from XML using xsd.exe

The XML Schema Definition Tool (xsd.exe) can be used to generate xml schema files from XML and better still C# classes from xml schema files.

Creating classes based upon an XML schema file

So, in its simplest usage, we can simply type

xsd person.xsd /classes

and this generates C# classes representing the xml schema. The default output is C#, but using the /language switch (or the shorter form /l) we can generate Visual Basic using the VB value, JScript using JS, or CS if we wanted to explicitly state the language was to be C#. So, for example, using the previous command line but now generating VB code, we can write

xsd person.xsd /classes /l:VB

Assuming we have an xml schema, person.xsd, which looks like this

<?xml version="1.0" encoding="utf-8"?>
<xs:schema elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Person" nillable="true" type="Person" />
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element minOccurs="0" maxOccurs="1" name="FirstName" type="xs:string" />
      <xs:element minOccurs="0" maxOccurs="1" name="LastName" type="xs:string" />
      <xs:element minOccurs="1" maxOccurs="1" name="Age" type="xs:int" />
    </xs:sequence>
  </xs:complexType>
</xs:schema>

The class created (in C#) looks like the following (comments removed)

[System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "4.0.30319.17929")]
[System.SerializableAttribute()]
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.ComponentModel.DesignerCategoryAttribute("code")]
[System.Xml.Serialization.XmlRootAttribute(Namespace="", IsNullable=true)]
public partial class Person {
    
    private string firstNameField;
    
    private string lastNameField;
    
    private int ageField;
    
    public string FirstName {
        get {
            return this.firstNameField;
        }
        set {
            this.firstNameField = value;
        }
    }
    
    public string LastName {
        get {
            return this.lastNameField;
        }
        set {
            this.lastNameField = value;
        }
    }
    
    public int Age {
        get {
            return this.ageField;
        }
        set {
            this.ageField = value;
        }
    }
}
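The generated class can then be used directly with the XmlSerializer; for example (the XML string here is made up for illustration):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// assumes the generated Person class above is compiled alongside this code
var serializer = new XmlSerializer(typeof(Person));

string xml = "<Person><FirstName>Jane</FirstName><LastName>Doe</LastName><Age>30</Age></Person>";

var person = (Person)serializer.Deserialize(new StringReader(xml));
Console.WriteLine(person.FirstName + " " + person.LastName + " " + person.Age); // Jane Doe 30
```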

Creating an XML schema based on an XML file

It might be that we’ve got an XML file but no xml schema, so we’ll need to convert that to an xml schema before we can generate our classes file. Again we can use xsd.exe

xsd person.xml

The above will create an xml schema based upon the XML file. Obviously this is limited to what is available in the XML file itself, so if your XML doesn’t have “optional” elements/attributes, xsd.exe cannot include those in the schema it produces.

Assuming we therefore started with an XML file, the person.xml, which looks like the following

<?xml version="1.0" encoding="utf-8"?>

<Person>
   <FirstName>Spoungebob</FirstName>
   <LastName>Squarepants</LastName>
   <Age>21</Age>
</Person>

Note: I’ve no idea if that is really SpongeBob’s age.

Running xsd.exe against person.xml file we get the following xsd schema

<?xml version="1.0" encoding="utf-8"?>
<xs:schema id="NewDataSet" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
  <xs:element name="Person">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="FirstName" type="xs:string" minOccurs="0" />
        <xs:element name="LastName" type="xs:string" minOccurs="0" />
        <xs:element name="Age" type="xs:string" minOccurs="0" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
    <xs:complexType>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element ref="Person" />
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>

From this we could now create our classes as previously outlined.

Creating an xml schema based on .NET type

What if we’ve got a class/type and we want to serialize it as XML, let’s use xsd.exe to create the XML schema for us.

If the class looks like the following

public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public int Age { get; set; }
}

Note: Assuming the class is compiled into an assembly called DomainObjects.dll

Then running xsd.exe with the following command line

xsd.exe DomainObjects.dll /type:Person

will then generate the following xml schema

<?xml version="1.0" encoding="utf-8"?>
<xs:schema elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Person" nillable="true" type="Person" />
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element minOccurs="0" maxOccurs="1" name="FirstName" type="xs:string" />
      <xs:element minOccurs="0" maxOccurs="1" name="LastName" type="xs:string" />
      <xs:element minOccurs="1" maxOccurs="1" name="Age" type="xs:int" />
    </xs:sequence>
  </xs:complexType>
</xs:schema>

You’ll notice this is slightly different from the schema generated from the person.xml file – in particular, Age is now typed as xs:int rather than xs:string, since the type information is available from the class itself.