Category Archives: Programming

Operator overloading in F#

More adventures with F#…

Operator overloading is documented perfectly well in Operator Overloading (F#) but to summarize I’ve created this post…

Operators in F# can be overloaded at the class, record type or global level.

Overloading an operator on a class or record type

Let’s look at the syntax for overloading an operator on a class or record type.

Note: Reproduced from the Operator Overloading (F#) page

static member (operator-symbols) (parameter-list) =
    method-body
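
For example, a minimal sketch of overloading + on a record type might look like this (the Vector type here is purely illustrative):

type Vector = { X: float; Y: float } with
    static member (+) (a: Vector, b: Vector) =
        { X = a.X + b.X; Y = a.Y + b.Y }

// in use we'd have

let sum = { X = 1.0; Y = 2.0 } + { X = 3.0; Y = 4.0 }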

Overloading an operator which is globally accessible

Let’s look at the syntax for overloading an operator at a global level.

Note: Reproduced from the Operator Overloading (F#) page

let [inline] (operator-symbols) parameter-list = 
    function-body

More…

You can overload the “standard” operators but you can also create your own operators. For example FAKE creates its own global operator, @@, to combine two file paths.

The below is taken from the FAKE source code, available on GitHub

let inline (@@) path1 path2 = combinePaths path1 path2

As is pretty obvious, this is a global-level definition of a newly created operator, @@, which takes two parameters.

Where an operator may be used as both binary and unary (infix or prefix), such as + or - (+., -., %, %%, & and && may also be used as prefix operators), we need to prefix the operator with a tilde (~) when defining the unary form. Hence to overload + for use as a unary operator we’d write something like

let inline (~+) x = [ x ]

// in use we'd have

let v = + "hello"

Whereas using the + as a binary operator we’d write something like

let inline (+) x y = [ x, y ]

// in use we'd have

let v = "hello" + "world"

If you’re coming from C# to F# you’ll have already noticed the “funky” set of operators that F# supports. As shown with the @@ operator sample (taken from FAKE), you can combine operator symbols to produce all sorts of new operators, for example

let (<==>) x y = [ x, y ]
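
In use, this would produce a single-element list containing a tuple of the two operands:

// in use we'd have

let v = 1 <==> 2  // [ (1, 2) ]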

Describing my tables, views, stored procs etc.

Having become a little too reliant (at times) on great GUI tools for interacting with my databases, I had to remind myself that it’s pretty easy to do this in code. Sure, good old Aqua Studio does it with CTRL+D on a data object in the editor, and both Oracle SQL Developer and MS SQL Server Management Studio allow us to easily drill down to the item in the data object tree, but still. Here it is in code…

Tables and views

For Oracle

desc TableName

For SQL Server

exec sp_columns TableName

Stored Procs

For Oracle

desc StoredProcName

For SQL Server

exec sp_help StoredProcName

In fact sp_help can be used for Stored Procs and Tables/Views.
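
Similarly, if you want to see the actual text of a stored procedure (or a view or trigger) in SQL Server, sp_helptext is worth knowing too:

exec sp_helptext StoredProcName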

Entity Framework & AutoMapper with navigational properties

I’ve got a webservice which uses EF to query SQL Server for data. The POCOs for the three tables we’re interested in are listed below:

public class Plant
{
   public int Id { get; set; }

   public PlantType PlantType { get; set; }
   public LifeCycle LifeCycle { get; set; }

  // other properties
}

public class PlantType
{
   public int Id { get; set; }
   public string Name { get; set; }
}

public class LifeCycle
{
   public int Id { get; set; }
   public string Name { get; set; }
}

The issue is that when a new plant is added (or updated, for that matter) using the AddPlant (or UpdatePlant) method, we need to ensure EF references the LifeCycle and PlantType within its context, i.e. if we try to simply call something like

context.Plants.Add(newPlant);

then (even though the LifeCycle and PlantType have an existing Id in the database) EF appears to create new PlantTypes and LifeCycles, giving us multiple instances of the same LifeCycle or PlantType name. For the update method I’ve been using AutoMapper to map all the properties, which works well except for the navigational properties, where the same EF problem occurs.

I tried several ways to solve this but kept hitting snags. We need to get the instances of the PlantType and LifeCycle from the EF context and assign them to the navigational properties to stop EF adding new PlantTypes etc., and I wanted to achieve this in a clean way with AutoMapper. By default, mappings in AutoMapper are created via the static Mapper class, which implies the mappings should not change based upon the current data; what we really need is to create mappings for a specific webservice method call.

To create an instance of the mapper and use it we can do the following (error checking etc. removed)

using (PlantsContext context = new PlantsContext())
{
   var configuration = new ConfigurationStore(
                     new TypeMapFactory(), MapperRegistry.AllMappers());
   var mapper = new MappingEngine(configuration);
   configuration.CreateMap<Plant, Plant>()
         .ForMember(p => p.PlantType, 
            c => c.MapFrom(pl => context.PlantTypes.FirstOrDefault(pt => pt.Id == pl.PlantType.Id)))
         .ForMember(p => p.LifeCycle, 
            c => c.MapFrom(pl => context.LifeCycles.FirstOrDefault(lc => lc.Id == pl.LifeCycle.Id)));

   //... use the mapper.Map to map our data and then context.SaveChanges() 
}

So it can be seen that we can now interact with the instance of the context to find the PlantType and LifeCycle to map, and we do not end up trying to create mappings on the static class.
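
To flesh out that final comment in the code, the update path might look something like the following sketch (the updatedPlant parameter and the Plants DbSet lookup are assumptions for illustration):

   // find the tracked entity and map the incoming values onto it,
   // with the navigational properties resolved via the context as above
   Plant existing = context.Plants.Find(updatedPlant.Id);
   mapper.Map(updatedPlant, existing);
   context.SaveChanges();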

My first attempt at implementing a Computational Expression in F#

So this is my first attempt at implementing a computational expression in F#. I’m not going to go into definitions or the likes as to what computational expressions are as there are far better posts out there on this subject than I could probably write, at this time. I’ll list some in a “further reading” section at the end of the post.

What I’m going to present here is a builder which really just creates a list of names – nothing particularly clever – but it gives a basic idea on getting started with computational expressions (I hope).

So first off we need to create a builder type, mine’s called CurveBuilder

type Items = Items of string list

type CurveBuilder() =
    member this.Yield (()) = Items []

    [<CustomOperation ("create", MaintainsVariableSpace = true)>]
    member this.Create (Items sources, name: string) = 
        Items [ yield! sources
                yield name ]

As this is just a simple demo, the only thing I’m doing is creating a list of strings. You would be correct in thinking I could do this more easily using a collection class, but I’m really just interested in seeing the builder and its interactions without too much clutter, so go with me on this one…

Before we discuss the builder code, let’s take a look at how we’d use a builder in our application

let builder = CurveBuilder()

let curves = 
   builder {
      create "risky_curve1.gbp"
      create "risky_curve2.usd"
   }

In this code we first create an instance of a CurveBuilder, then we use the builder to create a list of curves. The Yield method is called first on the CurveBuilder, returning an empty Items value. Each subsequent create operation then calls the Create method of the CurveBuilder and, as can be seen, creates a new Items value containing the previous items plus our new curve name.

Simple enough but surely there’s more to this than meets the eye

The example above is a minimal implementation, of course, of a builder. You’ll notice that we have an attribute on one method (the Create method) and not on the Yield method.

So the attribute CustomOperation is used on a member of a builder to create new “query operators”. Basically it extends the builder functionality with new operators named whatever you want to name them.

On the other hand the Yield method is a “standard” builder method. There are several other methods which F# knows about implicitly within a builder class, including Return, Zero, Bind, For, YieldFrom, ReturnFrom, Delay, Combine, Run and more; see Computation Expressions (F#) for a full list of “built-in workflows”. With these we can obviously produce something more complex than the example presented in this post, which is in essence a glorified “list” builder with snazzy syntax.

Discriminated Union gotcha

One thing that caught me out with the above code is that I wanted to simply list the curves I’d created but couldn’t figure out how to “deconstruct” the discriminated union

type Items = Items of string list

to a simple string list – at this point I claim ignorance as I’m not using F# as much as I’d like so am still fairly inexperienced with it.

My aim was to produce something like this

for curve in curves do
   printfn "Curve: %s" curve

but this failed with the error The type ‘Items’ is not a type whose values can be enumerated with this syntax, i.e. is not compatible with either seq<_>, IEnumerable<_> or IEnumerable and does not have a GetEnumerator method

I needed, in essence, to disconnect the Items from the string list; to achieve this I found the excellent post Discriminated Unions.

The solution is as follows

let (Items items) = curves

for curve in items do
    printfn "Curve: %s" curve

Note the let (Items items) = curves line: the let binding pattern matches against the single-case union to extract the underlying list.
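
Equivalently, we could deconstruct with an explicit match expression:

match curves with
| Items items ->
    for curve in items do
        printfn "Curve: %s" curve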

Further Reading
Implementing a builder: Zero and Yield
Implementing a builder: Combine
Implementing a builder: Delay and Run
Implementing a builder: Overloading
Implementing a builder: Adding laziness
Implementing a builder: The rest of the standard methods

Computation Expressions (F#)

The Vending Machine Change problem

I was reading about the “Vending Machine Change” problem the other day. This is a well known problem, which I’m afraid to admit I had never heard of, but I thought it was interesting enough to take a look at now.

Basically the problem that we’re trying to solve is this: you need to write the software to calculate the minimum number of coins required to return an amount of change to the user. In other words, if a vending machine had the coins 1, 2, 5 & 10, what is the minimum number of coins required to make up the change of 43 pence (or whatever units of currency you want to use)?

The coin denominations should be supplied, so the algorithm is not specific to the UK or any other country and the amount in change should also be supplied to the algorithm.

First Look

This is a standard solution to the Vending Machine problem (please note: this code is all over the internet in various languages, I’ve just made a couple of unimpressive changes to it)

static int Calculate(int[] coins, int change)
{
   int[] counts = new int[change + 1];
   counts[0] = 0;

   for(int i = 1; i <= change; i++)
   {
      int count = int.MaxValue;
      foreach(int coin in coins)
      {
         int total = i - coin;
         if(total >= 0 && count > counts[total])
         {
            count = counts[total];
         }
      }
      counts[i] = (count < int.MaxValue) ? count + 1 : int.MaxValue;
   }
   return counts[change];
}

What happens in this code is that we create an array, counts, which will contain the minimum number of coins for each value between 1 and the amount of change required. We use the 0 index as a counter start value (hence we set counts[0] to 0).

Next we loop through each possible change value, calculating the minimum number of coins required to make up that value; we use int.MaxValue to indicate that no combination of coins could match the amount of change.
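
For example, with the denominations from the problem statement we’d get the following (the expected result here is worked out by hand):

int[] coins = { 1, 2, 5, 10 };
int count = Calculate(coins, 43); // 6 coins: 10 + 10 + 10 + 10 + 2 + 1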

This algorithm assumes an infinite number of coins of each denomination, but this is obviously an unrealistic scenario, so let’s have a look at my attempt to solve this problem for a finite number of each coin.

Let’s try to make things a little more complex

So, as mentioned above, I want to now try to calculate the minimum number of coins to produce the amount of change required, where the number of coins of each denomination is finite.

Let’s start by defining a Coin class

public class Coin
{
   public Coin(int denomination, int count)
   {
      Denomination = denomination;
      Count = count;
   }

   public int Denomination { get; set; }
   public int Count { get; set; }
}

Before I introduce my attempt at a solution, let’s write some tests. I’ve not got great names for many of the tests; they’re mainly for me to try different test scenarios out, but I’m sure you get the idea.

public class VendingMachineTests
{
   private bool Expects(IList<Coin> coins, int denomination, int count)
   {
      Coin c = coins.FirstOrDefault(x => x.Denomination == denomination);
      return c == null ? false : c.Count == count;
   }

   [Fact]
   public void Test1()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 100),
         new Coin(5, 100),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 15);
      Assert.Equal(2, results.Count);
      Assert.True(Expects(results, 10, 1));
      Assert.True(Expects(results, 5, 1));
   }

   [Fact]
   public void Test2()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 100),
         new Coin(5, 100),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 1);
      Assert.Equal(1, results.Count);
      Assert.True(Expects(results, 1, 1));
   }

   [Fact]
   public void Test3()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 1),
         new Coin(5, 1),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 20);
      Assert.Equal(4, results.Count);
      Assert.True(Expects(results, 10, 1));
      Assert.True(Expects(results, 5, 1));
      Assert.True(Expects(results, 2, 2));
      Assert.True(Expects(results, 1, 1));
   }

   [Fact]
   public void NoMatchDueToNoCoins()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 0),
         new Coin(5, 0),
         new Coin(2, 0),
         new Coin(1, 0),
      };

      Assert.Null(VendingMachine.Calculate(coins, 20));
   }

   [Fact]
   public void NoMatchDueToNotEnoughCoins()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 5),
         new Coin(5, 0),
         new Coin(2, 0),
         new Coin(1, 0),
      };

      Assert.Null(VendingMachine.Calculate(coins, 100));
   }

   [Fact]
   public void Test4()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 1),
         new Coin(5, 1),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 3);
      Assert.Equal(2, results.Count);
      Assert.True(Expects(results, 2, 1));
      Assert.True(Expects(results, 1, 1));
   }

   [Fact]
   public void Test5()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 0),
         new Coin(5, 0),
         new Coin(2, 0),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 34);
      Assert.Equal(1, results.Count);
      Assert.True(Expects(results, 1, 34));
   }

   [Fact]
   public void Test6()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(50, 2),
         new Coin(20, 1),
         new Coin(10, 4),
         new Coin(1, int.MaxValue),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 98);
      Assert.Equal(4, results.Count);
      Assert.True(Expects(results, 50, 1));
      Assert.True(Expects(results, 20, 1));
      Assert.True(Expects(results, 10, 2));
      Assert.True(Expects(results, 1, 8));
   }

   [Fact]
   public void Test7()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(50, 1),
         new Coin(20, 2),
         new Coin(15, 1),
         new Coin(10, 1),
         new Coin(1, 8),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 98);
      Assert.Equal(3, results.Count);
      Assert.True(Expects(results, 50, 1));
      Assert.True(Expects(results, 20, 2));
      Assert.True(Expects(results, 1, 8));
   }
}

Now, here’s the code for my attempt at solving this problem. It uses a “greedy” algorithm, i.e. it tries to find the largest coin(s) first. The code therefore requires that the coins are sorted largest to smallest; I have not put the sort within the Calculate method because it’s called recursively, so it’s down to the calling code to handle this.

There may well be a better way to implement this algorithm, obviously one might prefer to remove the recursion (I may revisit the code when I have time to implement that change).

public static class VendingMachine
{
   public static IList<Coin> Calculate(IList<Coin> coins, int change, int start = 0)
   {
      for (int i = start; i < coins.Count; i++)
      {
         Coin coin = coins[i];
         // no point calculating anything if no coins exist or the 
         // current denomination is too high
         if (coin.Count > 0 && coin.Denomination <= change)
         {
            int remainder = change % coin.Denomination;
            if (remainder < change)
            {
               int howMany = Math.Min(coin.Count, 
                   (change - remainder) / coin.Denomination);

               List<Coin> matches = new List<Coin>();
               matches.Add(new Coin(coin.Denomination, howMany));

               int amount = howMany * coin.Denomination;
               int changeLeft = change - amount;
               if (changeLeft == 0)
               {
                   return matches;
               }

               IList<Coin> subCalc = Calculate(coins, changeLeft, i + 1);
               if (subCalc != null)
               {
                  matches.AddRange(subCalc);
                  return matches;
               }
            }
         }
      }
      return null;
   }
}

Issues with this solution

Whilst this solution does the job pretty well, it’s not perfect. If we had the coins 50, 20, 11, 10 and 1, the optimal minimum number of coins to make up the change for 33 would be 3 * 11 coins. But with the algorithm listed above, the result would be 1 * 20, 1 * 11 and then 2 * 1 coins.

Of course the above example assumes we have three of the 11 unit coins in the vending machine.

To solve this we could call the same algorithm but for each call we remove the largest coin type each time. Let’s look at this by first adding the unit test

[Fact]
public void Test8()
{
   List<Coin> coins = new List<Coin>
   {
      new Coin(50, 1),
      new Coin(20, 2),
      new Coin(11, 3),
      new Coin(10, 1),
      new Coin(1, 8),
   };

   IList<Coin> results = VendingMachine.CalculateMinimum(coins, 33);
   Assert.Equal(1, results.Count);
   Assert.True(Expects(results, 11, 3));
}

To transition the previous solution to the new improved one, we’ve simply added a new method named CalculateMinimum. The purpose of this method is to try to find the best solution by first calculating with all coins, then reducing the set of available coins by removing the largest denomination, finding the best solution again, then removing the next largest and so on. Here’s some code which might better demonstrate this

public static IList<Coin> CalculateMinimum(IList<Coin> coins, int change)
{
   // used to store the minimum matches
   IList<Coin> minimalMatch = null;
   int minimalCount = -1;

   IList<Coin> subset = coins;
   for (int i = 0; i < coins.Count; i++)
   {
      IList<Coin> matches = Calculate(subset, change);
      if (matches != null)
      {
         int matchCount = matches.Sum(c => c.Count);
         if (minimalMatch == null || matchCount < minimalCount)
         {
            minimalMatch = matches;
            minimalCount = matchCount;
         }
      }
      // reduce the list of possible coins
      subset = subset.Skip(1).ToList();
   }

   return minimalMatch;
}

Performance wise, this (in conjunction with the Calculate method) is sub-optimal if we need to calculate such minimum numbers of coins many times in a short period of time. A lack of state means we may end up calculating the same change multiple times. Of course, if performance were a concern, we might save previous calculations and first check whether the algorithm already has a valid solution each time we calculate change, and/or we might calculate a “standard” set of results at start-up. For example if we’re selling cans of drink for 80 pence we could pre-calculate change based upon likely inputs to the vending machine, i.e. 90p, £1 or £2 coins.
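
As a sketch of that caching idea (this assumes the available coins don’t change between calls; otherwise the cache key would need to include them):

public static class CachingVendingMachine
{
   private static readonly Dictionary<int, IList<Coin>> cache = 
      new Dictionary<int, IList<Coin>>();

   public static IList<Coin> CalculateMinimumCached(IList<Coin> coins, int change)
   {
      IList<Coin> result;
      if (!cache.TryGetValue(change, out result))
      {
         // only do the expensive work the first time we see this amount
         result = VendingMachine.CalculateMinimum(coins, change);
         cache[change] = result;
      }
      return result;
   }
}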

Oh, and we might prefer to remove the use of the LINQ code such as Skip and ToList to better utilise memory etc.

References

http://onestopinterviewprep.blogspot.co.uk/2014/03/vending-machine-problem-dynamic.html
http://codercareer.blogspot.co.uk/2011/12/no-26-minimal-number-of-coins-for.html
http://techieme.in/techieme/minimum-number-of-coins/
http://www.careercup.com/question?id=15139685

Adding data to WCF headers

Having covered this subject using WSE3 previously, now let’s look at adding an SSO token to a WCF header.

Configuration

In your App.config you’ve probably got something like

<system.serviceModel>
   <client configSource="Config\servicemodel-client-config.xml" />
   <bindings configSource="Config\servicemodel-bindings-config.xml" />
</system.serviceModel>

We’re going to add two more configuration files, as follows

<system.serviceModel>
   <client configSource="Config\servicemodel-client-config.xml" />
   <bindings configSource="Config\servicemodel-bindings-config.xml" />
   <!-- Additions -->
   <behaviors configSource="Config\servicemodel-behaviors-config.xml" />
   <extensions configSource="Config\servicemodel-extensions-config.xml" />
</system.serviceModel>

As the names suggest, these files will contain the config for behaviors and extensions.

Let’s take a look at the servicemodel-behaviors-config.xml first

<?xml version="1.0" encoding="utf-8" ?>
<behaviors>
   <endpointBehaviors>
      <behavior name="serviceEndpointBehaviour">
         <ssoEndpointBehaviorExtension />
         <!-- insert any other behavior extensions here -->
      </behavior>
   </endpointBehaviors>
</behaviors>

Now we need to actually define what implements ssoEndpointBehaviorExtension. So in the servicemodel-extensions-config.xml configuration, we might have something like the following

<?xml version="1.0" encoding="utf-8" ?>
<extensions>
   <behaviorExtensions>
      <add name="ssoEndpointBehaviorExtension"
            type="SsoService.SsoEndpointBehaviorExtensionElement, SsoService"/>
      <!-- insert other extensions here -->
   </behaviorExtensions>
</extensions>

So as we can see, the ssoEndpointBehaviorExtension behavior is associated with the SsoEndpointBehaviorExtensionElement type in the SsoService assembly.

Implementing the behavior/extension

Unlike WSE3 we do not need to use an attribute to associate the extension with a service call.

Let’s start by looking at the SsoEndpointBehaviorExtensionElement behavior extension.

public class SsoEndpointBehaviorExtensionElement : BehaviorExtensionElement
{
   public override Type BehaviorType
   {
      get { return typeof(SsoEndpointBehavior); }
   }
   
   protected override object CreateBehavior()
   {
      return new SsoEndpointBehavior();
   }
}

The code above relates to the actual extension element, so it really just creates the behavior when required. Here’s the SsoEndpointBehavior.

public class SsoEndpointBehavior : IEndpointBehavior
{
   public void AddBindingParameters(ServiceEndpoint endpoint, 
                 BindingParameterCollection bindingParameters)
   {
   }

   public void ApplyClientBehavior(ServiceEndpoint endpoint, 
                 ClientRuntime clientRuntime)
   {
      clientRuntime.MessageInspectors.Add(new SsoMessageInspector());
   }

   public void ApplyDispatchBehavior(ServiceEndpoint endpoint, 
                 EndpointDispatcher endpointDispatcher)
   {
   }

   public void Validate(ServiceEndpoint endpoint)
   {
   }
}

This code simply adds our message inspector to the client runtime’s MessageInspectors collection. Finally let’s look at the code that actually intercepts the send requests to add the SSO token to the header.

public class SsoMessageInspector : IClientMessageInspector
{
   public object BeforeSendRequest(ref Message request, IClientChannel channel)
   {
      request.Headers.Add(MessageHeader.CreateHeader("ssoToken", 
              String.Empty, 
              SsoManager.TokenString));
      return null;
   }

   public void AfterReceiveReply(ref Message reply, object correlationState)
   {
   }
}

In the above code, we add a header to the message with the name “ssoToken”, followed by the namespace (an empty string here) and then the value we wish to store with the header item; in this case our SSO token.
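
On the receiving side we might read the token back out of the incoming message headers with something like the following sketch (the empty namespace matches the String.Empty used above):

string token = OperationContext.Current
   .IncomingMessageHeaders
   .GetHeader<string>("ssoToken", String.Empty);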

Creating a custom panel using WPF

The Grid, StackPanel, WrapPanel and DockPanel are used to lay out controls in WPF. All four are derived from the WPF Panel class. So if we want to create our own “custom panel” we obviously use Panel as our starting point.

So to start with, we need to create a subclass of the Panel class in WPF. We then need to override both the MeasureOverride and ArrangeOverride methods.

public class MyCustomPanel : Panel
{
   protected override Size MeasureOverride(Size availableSize)
   {
      return base.MeasureOverride(availableSize);
   }

   protected override Size ArrangeOverride(Size finalSize)
   {
      return base.ArrangeOverride(finalSize);
   }
}

WPF implements a two-pass layout system to determine both the sizes and the positions of child elements within the panel.

So the first phase of this process is to measure the child items and find their desired size, given the available size.

It’s important to note that we need to call a child element’s Measure method before we can interact with its DesiredSize property. For example

protected override Size MeasureOverride(Size availableSize)
{
   Size size = new Size(0, 0);

   foreach (UIElement child in Children)
   {
      child.Measure(availableSize);
      size.Width = Math.Max(size.Width, child.DesiredSize.Width);
      size.Height = Math.Max(size.Height, child.DesiredSize.Height);
   }

   size.Width = double.IsPositiveInfinity(availableSize.Width) ?
      size.Width : availableSize.Width;

   size.Height = double.IsPositiveInfinity(availableSize.Height) ? 
      size.Height : availableSize.Height;

   return size;
}

Note: We mustn’t return an infinite value from MeasureOverride; when the available width/height is infinite we instead return the size calculated from the children (which starts at 0).

The next phase in this process is to handle the arrangement of the children using ArrangeOverride. For example

protected override Size ArrangeOverride(Size finalSize)
{
   foreach (UIElement child in Children)
   {
      child.Arrange(new Rect(0, 0, child.DesiredSize.Width, child.DesiredSize.Height));
   }
   return finalSize;
}

In the above, minimal code, we’re simply getting each child element’s desired size and arranging the child at point 0, 0, giving the child its desired width and height. So nothing exciting there. However we could arrange the children in other, more interesting ways at this point, such as stacking them with an offset like a deck of cards, ordering them largest to smallest (or vice versa), or maybe recreating an existing layout but using transformations to animate their arrangement.
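
As a quick sketch of that deck-of-cards idea, we might stagger each child by a fixed offset (the 20 pixels here is arbitrary):

protected override Size ArrangeOverride(Size finalSize)
{
   double offset = 0;
   foreach (UIElement child in Children)
   {
      // place each child down and to the right of the previous one
      child.Arrange(new Rect(offset, offset, 
         child.DesiredSize.Width, child.DesiredSize.Height));
      offset += 20;
   }
   return finalSize;
}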

Ninject ActivationStrategy

The Ninject ActivationStrategy allows us to create code which will be executed automatically by Ninject during activation and/or deactivation of an object.

So let’s say when we create an object we want it to be created in a two-phase process, i.e. after creation we want to initialize the object. In such a situation we might define an initialization interface, such as

public interface IObjectInitializer
{
   void Initialize();
}

We might have an object which looks like the following

public class MyObject : IObjectInitializer
{
   public MyObject()
   {
      Debug.WriteLine("Constructor");            
   }

   public void Initialize()
   {
      Debug.WriteLine("Initialized");
   }
}

Now when we want to create an instance of MyObject via Ninject we obviously need to set up the relevant binding and get an instance of MyObject from the container, thus

StandardKernel kernel = new StandardKernel();

kernel.Bind<MyObject>().To<MyObject>();

MyObject obj = kernel.Get<MyObject>();
obj.Initialize();

In the above code we get an instance of MyObject and then call the Initialize method; this may be a pattern we repeat often, hence it’d be much better if Ninject could handle this for us.

To achieve this we can add an ActivationStrategy to Ninject as follows

kernel.Components.Add<IActivationStrategy, MyInitializationStrategy>();

This will obviously need to be set up prior to any instantiation of objects.

Now let’s look at the MyInitializationStrategy object

public class MyInitializationStrategy : ActivationStrategy
{
   public override void Activate(IContext context, InstanceReference reference)
   {
      reference.IfInstanceIs<IObjectInitializer>(x => x.Initialize());
   }
}

In actual fact, the people behind Ninject have already catered for two-phase creation by supplying (and of course adding to the Components collection) a strategy named InitializableStrategy, which does exactly what MyInitializationStrategy does for objects implementing IInitializable. They also use the same ActivationStrategy mechanism for several other strategies, which handle property injection, method injection and more.

Another strategy that we can use with our own objects is the StartableStrategy, which handles objects implementing the IStartable interface. This supports both a Start and a Stop method on an object as part of activation and deactivation.

We can also implement code to be executed upon activation/deactivation via the fluent binding interface, for example

kernel.Bind<MyObject>().
   To<MyObject>().
   OnActivation(x => x.Initialize()).
   OnDeactivation(_ => Debug.WriteLine("Deactivation"));

Therefore in this instance we need not create an activation strategy for our activation/deactivation code, but instead use the OnActivation and/or OnDeactivation methods.

Note: Remember that if your object supports IInitializable and you also duplicate the calls within the OnActivation/OnDeactivation methods, your code will be called twice

Introduction to using Pex with Microsoft Code Digger

This post is specific to the Code Digger Add-In, which can be used with Visual Studio 2012 and 2013.

Requirements

The Code Digger add-in will appear in Tools | Extensions and Updates and of course can be downloaded via this dialog.

What is Pex ?

So Pex is a tool for automatically generating test suites. Pex will generate input/output values for your methods by analysing the code flow and the arguments required by each method.

What is Code Digger ?

Code Digger supplies an add-in for Visual Studio which allows us to select a method, generate inputs/outputs using Pex and display the results within Visual Studio.

Let’s use Code Digger

Enough talk, let’s write some code and try it out.

Create a new solution; I’m going to create a “standard” class library project. Older versions of Code Digger only worked with PCLs, but now (I’m using 0.95.4) you can go to Tools | Options in Visual Studio, select Pex’s General option, change DisableCodeDiggerPortableClassLibraryRestriction to True (if it’s not already set) and run Pex against non-PCL code.

Let’s start with a very simple class and a few methods

public static class Statistics
{
   public static double Mean(double[] values)
   {
      return values.Average();
   }

   public static double Median(double[] values)
   {
      Array.Sort(values);

      int mid = values.Length / 2;
      return (values.Length % 2 == 0) ?
         (values[mid - 1] + values[mid]) / 2 :
         values[mid];
   }

   public static double[] Mode(double[] values)
   {
      var grouped = values.GroupBy(v => v).OrderBy(g => g.Count());
      int max = grouped.Max(g => g.Count());
			
      return (max <= 1) ?
         new double[0] :
         grouped.Where(g => g.Count() == max).Select(g => g.Key).ToArray();
   }
}

Now you may have noticed we do not check for the “values” array being null or empty. This is on purpose, to demonstrate Pex detecting possible failures.

Now, we’ll use the Code Digger add-in.

Right mouse click on a method (let’s take the Mean method to begin with) and select Generate Inputs / Outputs Table. Pex will run and create a list of inputs and outputs. In my code for Mean I get two failures: Pex has executed my method with a null input and with an empty array, and neither case is handled (as mentioned previously) by my Mean code.

If you now try the other methods you should see similar failures, but hopefully more successes with more input values.

Unfortunately (at the time of writing at least) there doesn’t appear to be an option in Code Digger to generate either unit tests automatically or save the inputs for my own unit tests. So for now you’ll have to manually write your tests with the failing inputs and implement code to make those work.
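
For example, taking the null input failure reported for Mean, we might add a test and a guard along these lines (this is just one possible fix, not the only one):

[Fact]
public void MeanThrowsOnNullValues()
{
   Assert.Throws<ArgumentNullException>(() => { Statistics.Mean(null); });
}

// ...and guard the method accordingly
public static double Mean(double[] values)
{
   if (values == null)
      throw new ArgumentNullException("values");
   return values.Average();
}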

Note: I did find at one time that the Generate Inputs / Outputs Table menu option went missing; I disabled and re-enabled the Code Digger add-in, restarted Visual Studio and it reappeared.

Debugging a release build of a .NET application

What’s a Release Build compared to a Debug build

Release builds of a .NET application (by default) add optimizations and remove any debug code from the build, i.e. anything inside #if DEBUG is removed, as are calls to the Debug class (Trace calls remain, because the TRACE constant is defined for release builds by default). You also have reduced debug information. However you will still have .PDB files…

PDB files are generated by the compiler if the project’s properties allow .PDB files to be generated. Simply check the project properties, select the Build tab and then the Advanced… button. You’ll see Debug Info, which can be set to full, pdb-only or none. Obviously none will not produce any .PDB files.

At this point I don’t know the full differences between pdb-only and full (if I find out I’ll amend this post), but out of the box Release builds use pdb-only whilst Debug builds use full.
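
In MSBuild terms this maps to the DebugType property in the project file; a typical release configuration contains something like the following:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
   <DebugType>pdbonly</DebugType>
   <Optimize>true</Optimize>
</PropertyGroup>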

So what are .PDB files ?

Simply put – PDB files contain symbol information which allows us to map debug information to source files when we attach a debugger and step through the code.

Debugging a Release Build

It’s often the case that we’ll create a deployment of a Release build without the PDB files; this may be due to a desire to reduce the deployment footprint or some other reason. If you cannot, or do not wish to, deploy the PDBs with an application then you should store them for each specific release.

Before attaching our debugger (Visual Studio) we need to add the PDB file locations to Visual Studio. So select the Debug menu, then Options and Settings. From here select Debugging | Symbols from the tree view on the left of the Options dialog. Click on the add folder button and type in (or paste) the folder name for the symbols for the specific Release build.

Now attach Visual Studio using Debug | Attach to Process and the symbols will be loaded for the build, and you can step through the source code.

Let’s look at a real example

An application I work on deploys over the network and we do not include PDB files with it, so we can reduce the size of the deployment. If we find a bug only repeatable in “production”, we cannot step through the source code related to the build without both a version of the code related to that release and the PDB files for that release.

What we do is this: when our continuous integration server runs, it builds a specific version of the application as a release build and we embed the source repository revision into the EXE version number. This allows us to easily check out the source related to that build if need be.

During the build process we then copy the release build to a deployment folder, again using the source code revision in the folder name. We (as already mentioned) remove the PDB files (tests and other such files are also obviously removed). However we don’t just throw away the PDBs; we instead copy them to a folder named similarly to the release build folder but with Symbols within the folder name (and of course with the same version number). The PDBs are all copied to this folder and are now accessible if we need to debug a release build.

Now if the Release (or a production) build is executed and an error occurs or we just need to step through code for some other reason, we can get the specific source for the deployed version, direct Visual Studio to the PDB files for that build and now step through our code.

So don’t just delete your PDBs; store them in case you need to use them in the future.

Okay, how do we use the symbol/PDB files

So, in Visual Studio (if you’re using that to debug/step through your code), obviously open your project with the correct source for your release build.

In the Tools | Options dialog, select the Debugging parent node and then select Symbols, or of course just type Symbols into the search text box in Visual Studio 2013.

Now press the folder button and type in the location of your PDB files folder. Note that this option doesn’t have a folder browse option, so you’ll need to type (or copy and paste) the folder name yourself.

Ensure the folder is checked so that Visual Studio will load the symbols.

Now attach the debugger to your release build and Visual Studio will (as mentioned) locate the correct symbols, attach them and then allow you to step through your source.

See Specify Symbol (.pdb) and Source Files in the Visual Studio Debugger for more information and some screenshots of the process just described.