Author Archives: purpleblob

Entity Framework & AutoMapper with navigational properties

I’ve got a webservice which uses EF to query SQL Server for data. The POCOs for the three tables we’re interested in are listed below:

public class Plant
{
   public int Id { get; set; }

   public PlantType PlantType { get; set; }
   public LifeCycle LifeCycle { get; set; }

  // other properties
}

public class PlantType
{
   public int Id { get; set; }
   public string Name { get; set; }
}

public class LifeCycle
{
   public int Id { get; set; }
   public string Name { get; set; }
}

The issue is that when a new plant is added (or updated for that matter) using the AddPlant (or UpdatePlant) method we need to ensure EF references the LifeCycle and PlantType within its context, i.e. if we simply call something like

context.Plants.Add(newPlant);

then (even though the LifeCycle and PlantType already have existing Ids in the database) EF appears to create new PlantType and LifeCycle rows, giving us multiple instances of the same LifeCycle or PlantType name. For the update method I’ve been using AutoMapper to map all the properties, which works well except for the navigational properties, where the same EF problem occurs.
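One manual workaround (a sketch, assuming PlantsContext is a DbContext exposing PlantTypes and LifeCycles sets) is to re-point the navigation properties at tracked instances before adding:

```csharp
using (PlantsContext context = new PlantsContext())
{
   // Re-point the navigation properties at the instances this context
   // is tracking, so EF reuses the existing rows instead of inserting
   // duplicate PlantType/LifeCycle records.
   newPlant.PlantType = context.PlantTypes.Find(newPlant.PlantType.Id);
   newPlant.LifeCycle = context.LifeCycles.Find(newPlant.LifeCycle.Id);

   context.Plants.Add(newPlant);
   context.SaveChanges();
}
```

This works, but it scatters lookup code through every service method, which is what prompted the AutoMapper-based approach described next.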

I tried several ways to solve this but kept hitting snags. We need to get the instances of the PlantType and LifeCycle from the EF context and assign these to the navigational properties to stop EF adding new PlantTypes etc., and I wanted to achieve this in a clean way with AutoMapper. By default mappings in AutoMapper are created via the static Mapper class, which suggests the mappings should not change based upon the current data; what we really need is to create mappings for a specific webservice method call.

To create an instance of the mapper and use it we can do the following (error checking etc. removed)

using (PlantsContext context = new PlantsContext())
{
   var configuration = new ConfigurationStore(
                     new TypeMapFactory(), MapperRegistry.AllMappers());
   var mapper = new MappingEngine(configuration);
   configuration.CreateMap<Plant, Plant>()
         .ForMember(p => p.PlantType, 
            c => c.MapFrom(pl => context.PlantTypes.FirstOrDefault(pt => pt.Id == pl.PlantType.Id)))
         .ForMember(p => p.LifeCycle, 
            c => c.MapFrom(pl => context.LifeCycles.FirstOrDefault(lc => lc.Id == pl.LifeCycle.Id)));

   //... use the mapper.Map to map our data and then context.SaveChanges() 
}

So it can be seen that we can now interact with the instance of the context to find the PlantType and LifeCycle to map, and we no longer end up trying to create mappings on the static class.
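Putting it together, an update call might then look something like this (a sketch only; the incoming updatedPlant parameter, the Find lookup and the CreateMapper helper are my assumptions, not the post’s actual service code):

```csharp
public void UpdatePlant(Plant updatedPlant)
{
   using (PlantsContext context = new PlantsContext())
   {
      // hypothetical helper wrapping the configuration/mapper code shown above
      MappingEngine mapper = CreateMapper(context);

      // map onto the entity tracked by this context so EF performs
      // an update rather than inserting new rows
      Plant existing = context.Plants.Find(updatedPlant.Id);
      mapper.Map(updatedPlant, existing);

      context.SaveChanges();
   }
}
```

The key point is that mapper.Map(source, destination) writes onto the already-tracked entity, and the ForMember rules resolve PlantType/LifeCycle to instances this context knows about.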

My first attempt at implementing a Computational Expression in F#

So this is my first attempt at implementing a computational expression in F#. I’m not going to go into definitions of what computational expressions are, as there are far better posts out there on this subject than I could probably write at this time. I’ll list some in a “further reading” section at the end of the post.

What I’m going to present here is a builder which really just creates a list of names – nothing particularly clever – but it gives a basic idea on getting started with computational expressions (I hope).

So first off we need to create a builder type, mine’s called CurveBuilder

type Items = Items of string list

type CurveBuilder() =
    member this.Yield (()) = Items []

    [<CustomOperation ("create", MaintainsVariableSpace = true)>]
    member this.Create (Items sources, name: string) = 
        Items [ yield! sources
                yield name ]

As this is just a simple demo, the only thing I’m doing is creating a list of strings, which, you would be correct in thinking, I could do more easily using a collection class. But I’m really just interested in seeing the builder and its interactions without too much clutter, so go with me on this one…

Before we discuss the builder code, let’s take a look at how we’d use a builder in our application

let builder = CurveBuilder()

let curves = 
   builder {
      create "risky_curve1.gbp"
      create "risky_curve2.usd"
   }

In this code we first create an instance of a CurveBuilder, then we use the builder to create a list of the curves. The Yield method is called first on the CurveBuilder, returning an empty Items value. Each use of the create operation then calls the Create method of the CurveBuilder and, as can be seen, creates a new Items value containing the previous items plus our new curve name.

Simple enough but surely there’s more to this than meets the eye

The example above is, of course, a minimal implementation of a builder. You’ll notice that we have an attribute on one method (the Create method) but not on the Yield method.

So the attribute CustomOperation is used on a member of a builder to create new “query operators”. Basically it extends the builder functionality with new operators named whatever you want to name them.

On the other hand the Yield method is a “standard” builder method. There are several other methods which F# knows about implicitly within a builder class, including Return, Zero, Bind, For, YieldFrom, ReturnFrom, Delay, Combine, Run and more; see Computation Expressions (F#) for a full list of the built-in methods. With these we can obviously produce something more complex than the example presented in this post, which is in essence a glorified “list” builder with snazzy syntax.

Discriminated Union gotcha

One thing I got caught out with from the above code is that I wanted to simply list the curves I’d created but couldn’t figure out how to “deconstruct” the discriminated union

type Items = Items of string list

to a simple string list – at this point I claim ignorance as I’m not using F# as much as I’d like so am still fairly inexperienced with it.

My aim was to produce something like this

for curve in curves do
   printfn "Curve: %s" curve

but this failed with the error The type ‘Items’ is not a type whose values can be enumerated with this syntax, i.e. is not compatible with either seq<_>, IEnumerable<_> or IEnumerable and does not have a GetEnumerator method

I needed, in essence, to separate the Items from the string list; to achieve this I found the excellent post Discriminated Unions.

The solution is as follows

let (Items items) = curves

for curve in items do
    printfn "Curve: %s" curve

Note the let (Items items) = curves line: it pattern matches (deconstructs) the single-case union, binding the inner string list to items.

Further Reading
Implementing a builder: Zero and Yield
Implementing a builder: Combine
Implementing a builder: Delay and Run
Implementing a builder: Overloading
Implementing a builder: Adding laziness
Implementing a builder: The rest of the standard methods

Computation Expressions (F#)

The Vending Machine Change problem

I was reading about the “Vending Machine Change” problem the other day. This is a well known problem, which I’m afraid to admit I had never heard of, but I thought it was interesting enough to take a look at now.

Basically the problem we’re trying to solve is: you need to write the software to calculate the minimum number of coins required to return an amount of change to the user. In other words, if a vending machine had the coins 1, 2, 5 & 10, what is the minimum number of coins required to make up change of 43 pence (or whatever unit of currency you want to use)?

The coin denominations should be supplied, so the algorithm is not specific to the UK or any other country and the amount in change should also be supplied to the algorithm.

First Look

This is a standard solution to the Vending Machine problem (please note: this code is all over the internet in various languages, I’ve just made a couple of unimpressive changes to it)

static int Calculate(int[] coins, int change)
{
   int[] counts = new int[change + 1];
   counts[0] = 0;

   for(int i = 1; i <= change; i++)
   {
      int count = int.MaxValue;
      foreach(int coin in coins)
      {
         int total = i - coin;
         if(total >= 0 && count > counts[total])
         {
            count = counts[total];
         }
      }
      counts[i] = (count < int.MaxValue) ? count + 1 : int.MaxValue;
   }
   return counts[change];
}

What happens in this code is that we create an array counts which will contain the minimum number of coins for each value between 1 and the amount of change required. We use the 0 index as a counter start value (hence we set counts[0] to 0).

Next we loop through each possible change value, calculating the number of coins required to make up each amount; we use int.MaxValue to indicate that no combination of coins could make up that amount of change.
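A quick sanity check of the method above, using the denominations from the introduction (43 = 10 + 10 + 10 + 10 + 2 + 1):

```csharp
int[] coins = { 10, 5, 2, 1 };
// minimum number of coins needed to make 43 pence of change
int minCoins = Calculate(coins, 43);
Console.WriteLine(minCoins); // 6
```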

This algorithm assumes an infinite number of coins of each denomination, but this is obviously an unrealistic scenario, so let’s have a look at my attempt to solve this problem for a finite number of each coin.

Let’s try to make things a little more complex

So, as mentioned above, I want to now try to calculate the minimum number of coins to produce the amount of change required, where the number of coins of each denomination is finite.

Let’s start by defining a Coin class

public class Coin
{
   public Coin(int denomination, int count)
   {
      Denomination = denomination;
      Count = count;
   }

   public int Denomination { get; set; }
   public int Count { get; set; }
}

Before I introduce my attempt at a solution, let’s write some tests. I’ve not got great names for many of the tests; they’re mainly for me to try different scenarios out, but I’m sure you get the idea.

public class VendingMachineTests
{
   private bool Expects(IList<Coin> coins, int denomination, int count)
   {
      Coin c = coins.FirstOrDefault(x => x.Denomination == denomination);
      return c == null ? false : c.Count == count;
   }

   [Fact]
   public void Test1()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 100),
         new Coin(5, 100),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 15);
      Assert.Equal(2, results.Count);
      Assert.True(Expects(results, 10, 1));
      Assert.True(Expects(results, 5, 1));
   }

   [Fact]
   public void Test2()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 100),
         new Coin(5, 100),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 1);
      Assert.Equal(1, results.Count);
      Assert.True(Expects(results, 1, 1));
   }

   [Fact]
   public void Test3()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 1),
         new Coin(5, 1),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 20);
      Assert.Equal(4, results.Count);
      Assert.True(Expects(results, 10, 1));
      Assert.True(Expects(results, 5, 1));
      Assert.True(Expects(results, 2, 2));
      Assert.True(Expects(results, 1, 1));
   }

   [Fact]
   public void NoMatchDueToNoCoins()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 0),
         new Coin(5, 0),
         new Coin(2, 0),
         new Coin(1, 0),
      };

      Assert.Null(VendingMachine.Calculate(coins, 20));
   }

   [Fact]
   public void NoMatchDueToNotEnoughCoins()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 5),
         new Coin(5, 0),
         new Coin(2, 0),
         new Coin(1, 0),
      };

      Assert.Null(VendingMachine.Calculate(coins, 100));
   }

   [Fact]
   public void Test4()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 1),
         new Coin(5, 1),
         new Coin(2, 100),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 3);
      Assert.Equal(2, results.Count);
      Assert.True(Expects(results, 2, 1));
      Assert.True(Expects(results, 1, 1));
   }

   [Fact]
   public void Test5()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(10, 0),
         new Coin(5, 0),
         new Coin(2, 0),
         new Coin(1, 100),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 34);
      Assert.Equal(1, results.Count);
      Assert.True(Expects(results, 1, 34));
   }

   [Fact]
   public void Test6()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(50, 2),
         new Coin(20, 1),
         new Coin(10, 4),
         new Coin(1, int.MaxValue),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 98);
      Assert.Equal(4, results.Count);
      Assert.True(Expects(results, 50, 1));
      Assert.True(Expects(results, 20, 1));
      Assert.True(Expects(results, 10, 2));
      Assert.True(Expects(results, 1, 8));
   }

   [Fact]
   public void Test7()
   {
      List<Coin> coins = new List<Coin>
      {
         new Coin(50, 1),
         new Coin(20, 2),
         new Coin(15, 1),
         new Coin(10, 1),
         new Coin(1, 8),
      };

      IList<Coin> results = VendingMachine.Calculate(coins, 98);
      Assert.Equal(3, results.Count);
      Assert.True(Expects(results, 50, 1));
      Assert.True(Expects(results, 20, 2));
      Assert.True(Expects(results, 1, 8));
   }
}

Now, here’s the code for my attempt at solving this problem. It uses a “greedy” algorithm, i.e. trying to find the largest coin(s) first. The code therefore requires that the coins are sorted largest to smallest; I have not put the sort within the Calculate method because it’s called recursively, so it’s down to the calling code to handle this.

There may well be a better way to implement this algorithm, obviously one might prefer to remove the recursion (I may revisit the code when I have time to implement that change).

public static class VendingMachine
{
   public static IList<Coin> Calculate(IList<Coin> coins, int change, int start = 0)
   {
      for (int i = start; i < coins.Count; i++)
      {
         Coin coin = coins[i];
         // no point calculating anything if no coins exist or the 
         // current denomination is too high
         if (coin.Count > 0 && coin.Denomination <= change)
         {
            int remainder = change % coin.Denomination;
            if (remainder < change)
            {
               int howMany = Math.Min(coin.Count, 
                   (change - remainder) / coin.Denomination);

               List<Coin> matches = new List<Coin>();
               matches.Add(new Coin(coin.Denomination, howMany));

               int amount = howMany * coin.Denomination;
               int changeLeft = change - amount;
               if (changeLeft == 0)
               {
                   return matches;
               }

               IList<Coin> subCalc = Calculate(coins, changeLeft, i + 1);
               if (subCalc != null)
               {
                  matches.AddRange(subCalc);
                  return matches;
               }
            }
         }
      }
      return null;
   }
}

Issues with this solution

Whilst this solution does the job pretty well, it’s not perfect. If we had the coins, 50, 20, 11, 10 and 1 the optimal minimum number of coins to find the change for 33 would be 3 * 11 coins. But in the algorithm listed above, the result would be 1 * 20, 1 * 11 and then 2 * 1 coins.

Of course the above example assumes we have 3 of the 11-unit coins in the vending machine.

To solve this we could call the same algorithm but for each call we remove the largest coin type each time. Let’s look at this by first adding the unit test

[Fact]
public void Test8()
{
   List<Coin> coins = new List<Coin>
   {
      new Coin(50, 1),
      new Coin(20, 2),
      new Coin(11, 3),
      new Coin(10, 1),
      new Coin(1, 8),
   };

   IList<Coin> results = VendingMachine.CalculateMinimum(coins, 33);
   Assert.Equal(1, results.Count);
   Assert.True(Expects(results, 11, 3));
}

To transition from the previous solution to the improved one, we’ve simply added a new method named CalculateMinimum. The purpose of this method is to try to find the best solution by calculating with all coins, then reducing the set of available coins by removing the largest coin, finding the best solution again, then removing the next largest coin, and so on. Here’s some code which might better demonstrate this

public static IList<Coin> CalculateMinimum(IList<Coin> coins, int change)
{
   // used to store the minimum matches
   IList<Coin> minimalMatch = null;
   int minimalCount = -1;

   IList<Coin> subset = coins;
   for (int i = 0; i < coins.Count; i++)
   {
      IList<Coin> matches = Calculate(subset, change);
      if (matches != null)
      {
         int matchCount = matches.Sum(c => c.Count);
         if (minimalMatch == null || matchCount < minimalCount)
         {
            minimalMatch = matches;
            minimalCount = matchCount;
         }
      }
      // reduce the list of possible coins
      subset = subset.Skip(1).ToList();
   }

   return minimalMatch;
}

Performance wise, this (in conjunction with the Calculate method) is sub-optimal if we needed to calculate such minimum numbers of coins many times in a short period. A lack of state means we may end up calculating the same change multiple times. Of course we might save previous calculations and first check whether the algorithm already has a valid solution each time we calculate change, if performance was a concern, and/or we might calculate a “standard” set of results at start-up. For example, if we’re selling cans of drink for 80 pence we could pre-calculate change based upon likely inputs to the vending machine, i.e. 90p, £1 or £2 coins.
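As a rough sketch of that caching idea (the CalculateMinimumCached wrapper and changeCache dictionary are my additions, not part of the original class):

```csharp
private static readonly Dictionary<int, IList<Coin>> changeCache =
   new Dictionary<int, IList<Coin>>();

public static IList<Coin> CalculateMinimumCached(IList<Coin> coins, int change)
{
   // NOTE: the cache is only valid while the coin inventory is unchanged;
   // actually dispensing coins would mean clearing it.
   IList<Coin> cached;
   if (changeCache.TryGetValue(change, out cached))
   {
      return cached;
   }

   IList<Coin> result = CalculateMinimum(coins, change);
   changeCache[change] = result;
   return result;
}
```

Repeated requests for the same amount of change then cost a single dictionary lookup rather than a full recalculation.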

Oh, and we might prefer to remove the use of LINQ calls such as Skip and ToList to better utilise memory etc.

References

http://onestopinterviewprep.blogspot.co.uk/2014/03/vending-machine-problem-dynamic.html
http://codercareer.blogspot.co.uk/2011/12/no-26-minimal-number-of-coins-for.html
http://techieme.in/techieme/minimum-number-of-coins/
http://www.careercup.com/question?id=15139685

Adding data to WCF headers

Having previously covered this subject using WSE3, let’s now look at adding an SSO token to a WCF header.

Configuration

In your App.config you’ve probably got something like

<system.serviceModel>
   <client configSource="Config\servicemodel-client-config.xml" />
   <bindings configSource="Config\servicemodel-bindings-config.xml" />
</system.serviceModel>

We’re going to add two more configuration files, as follows

<system.serviceModel>
   <client configSource="Config\servicemodel-client-config.xml" />
   <bindings configSource="Config\servicemodel-bindings-config.xml" />
   <!-- Additions -->
   <behaviors configSource="Config\servicemodel-behaviors-config.xml" />
   <extensions configSource="Config\servicemodel-extensions-config.xml" />
</system.serviceModel>

As the names suggest, these files will include the config for behaviors and extensions.

Let’s take a look at the servicemodel-behaviors-config.xml first

<?xml version="1.0" encoding="utf-8" ?>
<behaviors>
   <endpointBehaviors>
      <behavior name="serviceEndpointBehaviour">
         <ssoEndpointBehaviorExtension />
         <!-- insert any other behavior extensions here -->
      </behavior>
   </endpointBehaviors>
</behaviors>

Now we need to actually define what implements ssoEndpointBehaviorExtension. So in the servicemodel-extensions-config.xml configuration, we might have something like the following

<?xml version="1.0" encoding="utf-8" ?>
<extensions>
   <behaviorExtensions>
      <add name="ssoEndpointBehaviorExtension"
            type="SsoService.SsoEndpointBehaviorExtensionElement, SsoService"/>
      <!-- insert other extensions here -->
   </behaviorExtensions>
</extensions>

So as we can see, the ssoEndpointBehaviorExtension behavior is associated with the type SsoEndpointBehaviorExtensionElement in the SsoService assembly.

Implementing the behavior/extension

Unlike WSE3 we do not need to use an attribute to associate the extension with a service call.

Let’s start by looking at the SsoEndpointBehaviorExtensionElement behavior extension.

public class SsoEndpointBehaviorExtensionElement : BehaviorExtensionElement
{
   public override Type BehaviorType
   {
      get { return typeof(SsoEndpointBehavior); }
   }
   
   protected override object CreateBehavior()
   {
      return new SsoEndpointBehavior();
   }
}

The code above relates to the extension element itself, and really just creates the actual behavior when required. Here’s the SsoEndpointBehavior.

public class SsoEndpointBehavior : IEndpointBehavior
{
   public void AddBindingParameters(ServiceEndpoint endpoint, 
                 BindingParameterCollection bindingParameters)
   {
   }

   public void ApplyClientBehavior(ServiceEndpoint endpoint, 
                 ClientRuntime clientRuntime)
   {
      clientRuntime.MessageInspectors.Add(new SsoMessageInspector());
   }

   public void ApplyDispatchBehavior(ServiceEndpoint endpoint, 
                 EndpointDispatcher endpointDispatcher)
   {
   }

   public void Validate(ServiceEndpoint endpoint)
   {
   }
}

This code simply adds our message inspector to the client runtime’s collection of message inspectors. Finally let’s look at the code that actually intercepts the send requests to add the SSO token to the header.

public class SsoMessageInspector : IClientMessageInspector
{
   public object BeforeSendRequest(ref Message request, IClientChannel channel)
   {
      request.Headers.Add(MessageHeader.CreateHeader("ssoToken", 
              String.Empty, 
              SsoManager.TokenString));
      return null;
   }

   public void AfterReceiveReply(ref Message reply, object correlationState)
   {
   }
}

In the above code, we add a new header to the message with the name “ssoToken”, followed by a namespace (empty here) and then the value we wish to store with the header item, in this case our SSO token.
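On the service side the token can then be pulled back out of the incoming message headers. A minimal sketch (the GetSsoToken helper is my addition; the header name and empty namespace must match those used by the inspector above):

```csharp
private static string GetSsoToken()
{
   MessageHeaders headers = OperationContext.Current.IncomingMessageHeaders;

   // FindHeader returns -1 if no matching header exists
   int index = headers.FindHeader("ssoToken", String.Empty);
   return index < 0 ? null : headers.GetHeader<string>(index);
}
```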

Creating a text templating engine Host

I’m in the process of creating a little application to codegen some source code for me from an XML schema (yes I can do this with xsd but I wanted the code to be more configurable). Instead of writing my own template language etc. I decided to try and leverage the T4 templating language.

I could simply write some code that can be called from a T4 template, but I decided it would be nicer if the codegen application simply acted as a host to the T4 template and allowed the template to call code on the host, so here’s what I did…

Running a T4 template from your application

The first thing I needed was to be able to actually run a T4 template. To do this you’ll need to add the following references

  • Microsoft.VisualStudio.TextTemplating.11.0
  • Microsoft.VisualStudio.TextTemplating.Interfaces.10.0
  • Microsoft.VisualStudio.TextTemplating.Interfaces.11.0

Obviously these are the versions at the time of writing, things may differ in the future.

Next we need to instantiate the T4 engine; this is achieved by using the Microsoft.VisualStudio.TextTemplating namespace and the following code

Engine engine = new Engine();
string result = engine.ProcessTemplate(File.ReadAllText("sample.tt"), host);

Note: The host will be supplied by us in the next section and obviously “sample.tt” would be supplied at runtime in the completed version of the code.

So, here we create an Engine and supply the template string and host to the ProcessTemplate method. The result of this call is the processed template.

Creating the host

Our host implementation will need to derive from MarshalByRefObject and implement the ITextTemplatingEngineHost interface.

Note: See Walkthrough: Creating a Custom Text Template Host for more information of creating a custom text template.

What follows is a basic implementation of the ITextTemplatingEngineHost based upon the Microsoft article noted above.

public class TextTemplatingEngineHost : MarshalByRefObject, ITextTemplatingEngineHost
{
   public virtual object GetHostOption(string optionName)
   {
      return (optionName == "CacheAssemblies") ? (object)true : null;
   }

   public virtual bool LoadIncludeText(string requestFileName, 
            out string content, out string location)
   {
      content = location = String.Empty;

      if (File.Exists(requestFileName))
      {
         content = File.ReadAllText(requestFileName);
         return true;
      }
      return false;
   }

   public virtual void LogErrors(CompilerErrorCollection errors)
   {
   }

   public virtual AppDomain ProvideTemplatingAppDomain(string content)
   {
      return AppDomain.CreateDomain("TemplatingHost AppDomain");
   }

   public virtual string ResolveAssemblyReference(string assemblyReference)
   {
      if (File.Exists(assemblyReference))
      {
         return assemblyReference;
      }

      string candidate = Path.Combine(Path.GetDirectoryName(TemplateFile), 
            assemblyReference);
      return File.Exists(candidate) ? candidate : String.Empty;
   }

   public virtual Type ResolveDirectiveProcessor(string processorName)
   {
      throw new Exception("Directive Processor not found");
   }

   public virtual string ResolveParameterValue(string directiveId, 
            string processorName, string parameterName)
   {
      if (directiveId == null)
      {
         throw new ArgumentNullException("directiveId");
      }
      if (processorName == null)
      {
         throw new ArgumentNullException("processorName");
      }
      if (parameterName == null)
      {
         throw new ArgumentNullException("parameterName");
      }

      return String.Empty;
   }

   public virtual string ResolvePath(string path)
   {
      if (path == null)
      {
         throw new ArgumentNullException("path");
      }

      if (File.Exists(path))
      {
         return path;
      }
      string candidate = Path.Combine(Path.GetDirectoryName(TemplateFile), path);
      if (File.Exists(candidate))
      {
         return candidate;
      }
      return path;
   }

   public virtual void SetFileExtension(string extension)
   {
   }

   public virtual void SetOutputEncoding(Encoding encoding, bool fromOutputDirective)
   {
   }

   public virtual IList<string> StandardAssemblyReferences
   {
      // bare minimum, returns the location of the System assembly
      get { return new[] { typeof (String).Assembly.Location }; }
   }

   public virtual IList<string> StandardImports
   {
      get { return new[] { "System" }; }
   }

   public string TemplateFile { get; set; }
}

Now the idea is that we can subclass the TextTemplatingEngineHost to implement a version specific for our needs.

Before we look at a specialization of this for our purpose, let’s look at a T4 template sample for generating our code

Note: mycodegen is both my assembly name and the namespace for my code generator which itself hosts the T4 engine.

<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ assembly name="mycodegen" #>
<#@ import namespace="mycodegen" #>
<#@ output extension=".cs" #>

<#
   ICodeGenerator cg = ((ICodeGenerator)this.Host);
#>

namespace <#= cg.Namespace #>
{
   <# 
   foreach(var c in cg.Classes) 
   {
   #>
   public partial class <#= c.Name #>
   { 
      <#  
      foreach(Property p in c.Properties)
      {
          if(p.IsArray)
          {
      #>
         public <#= p.Type #>[] <#= p.Name #> { get; set; }
      <#
           }
           else
           {
      #>
         public <#= p.Type #> <#= p.Name #> { get; set; }
      <#
           }
      }
      #>
   }
   <#
   }
   #>
}

So in the above code you can see that our host will support an interface named ICodeGenerator (which is declared in the mycodegen assembly and namespace). ICodeGenerator will simply supply the class names and properties extracted from the XML schema, and we’ll use the T4 template to generate the output.

By using this template we can easily change how we output our classes and properties. For example, xsd creates fields which are not required if we use auto-implemented property syntax, plus we can change the naming convention, property name case and so on and so forth. Whilst we could add code to the generated partial classes by including other files implementing further partial methods etc., if a type no longer exists in a month or two we need to ensure we delete the manually added code to keep our code clean. Using the T4 template we can auto-generate everything we need.
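The ICodeGenerator interface itself might look something like the following (my sketch only; the post doesn’t list it, and the ClassInfo/Property shapes are assumptions inferred from how the template uses cg.Classes, c.Properties, p.IsArray and so on):

```csharp
public interface ICodeGenerator
{
   // namespace emitted at the top of the generated file
   string Namespace { get; }

   // one entry per class extracted from the XML schema
   IEnumerable<ClassInfo> Classes { get; }
}

public class ClassInfo
{
   public string Name { get; set; }
   public IList<Property> Properties { get; set; }
}

public class Property
{
   public string Name { get; set; }
   public string Type { get; set; }
   public bool IsArray { get; set; }
}

// the concrete host then plays both roles: T4 host and data source
public class CodeGenHost : TextTemplatingEngineHost, ICodeGenerator
{
   public string Namespace { get; set; }
   public IEnumerable<ClassInfo> Classes { get; set; }
}
```

The template’s cast ((ICodeGenerator)this.Host) then succeeds because the host instance passed to ProcessTemplate implements the interface.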

References

Walkthrough: Creating a Custom Text Template Host
Processing Text Templates by using a Custom Host

Adding envelope headers to SOAP calls using WSE 3

This is a little old school as greenfield projects are more likely to use WCF or some other mechanism – but this blog is all about reminding myself how things work. In this case I’m going to look back at how WSE3 can be used to add data to the header of a SOAP envelope for web service calls.

WSE or Web Service Enhancements (from Microsoft) allows us to intercept SOAP messages and in the case of this post, add information to the SOAP message which can then be read by a service.

A likely scenario and one I will describe here is that we might wish to add a security token to the SOAP message which can be used by the service to ensure the user is authenticated.

Let’s get started

If you’ve not already got WSE3 on your machine then let’s get it, either by downloading WSE3 and referencing the following assembly

Microsoft.Web.Services3

or use NuGet to add the WSE package by using

Install-Package Microsoft.Web.Services3

Configuration

Next we need to create some config. So add/edit your App.config to include the following

<configSections>
   <section name="microsoft.web.services3" 
        type="Microsoft.Web.Services3.Configuration.WebServicesConfiguration, 
        Microsoft.Web.Services3, Version=3.0.0.0, Culture=neutral, 
        PublicKeyToken=31bf3856ad364e35" />
</configSections>

<microsoft.web.services3>
   <policy fileName="wse3ClientPolicyCache.config" />
   <diagnostics>
      <trace enabled="false" 
             input="InputTrace.webinfo" 
             output="OutputTrace.webinfo" />
      <detailedErrors enabled="true" />
   </diagnostics>
</microsoft.web.services3>

Notice we’ve stated the policy filename is wse3ClientPolicyCache.config (obviously this can be named whatever you want). This file will contain the config which denotes what extensions are to be executed during a web service call.

So let’s look at my wse3ClientPolicyCache.config file

<policies xmlns="http://schemas.microsoft.com/wse/2005/06/policy">
  <extensions>
    <extension name="ssoTokenAssertion" 
               type="SsoService.SsoTokenAssertion, SSOService"/>
  </extensions>
  <policy name="SsoPolicy">
    <ssoTokenAssertion />
    <requireActionHeader />
  </policy>
</policies>

Here we’ve added an extension named ssoTokenAssertion which is associated with a type in the standard form (type, assembly). We then create a policy which shows we’re expecting the policy to use the extension ssoTokenAssertion that we’ve just added. The requireActionHeader is documented here.

Changes to the web service code

So we’ve now created all the config to allow WSE to work but we now need to write some code.

We need to do a couple of things to allow our web service code (whether generated via wsdl.exe or hand coded) to work with WSE. Firstly we need to derive our service from Microsoft.Web.Services3.WebServicesClientProtocol and secondly we need to adorn the web service with the Microsoft.Web.Services3.Policy attribute.

In our configuration we created a policy named SsoPolicy, this should be the string passed to the Policy attribute. Here’s an example web service

[Microsoft.Web.Services3.Policy("SsoPolicy")]
public partial class UserService : WebServicesClientProtocol
{
   // implementation
}

In the above I’ve removed the wsdl generated code for simplicity.

Implementing our “SsoService.SsoTokenAssertion, SsoService” type

So we’ve got the configuration which states that we’re supplying a type SsoTokenAssertion which will add data to the SOAP envelope header. Firstly we create a SecurityPolicyAssertion derived class.

public class SsoTokenAssertion : SecurityPolicyAssertion
{
   private const string SSO_TOKEN_ASSERTION = "ssoTokenAssertion";

   public override SoapFilter CreateClientInputFilter(FilterCreationContext context)
   {
      return null;
   }

   public override SoapFilter CreateClientOutputFilter(FilterCreationContext context)
   {
      return new SsoClientSendFilter(this);
   }

   public override SoapFilter CreateServiceInputFilter(FilterCreationContext context)
   {
      return null;
   }

   public override SoapFilter CreateServiceOutputFilter(FilterCreationContext context)
   {
      return null;
   }

   public override void ReadXml(XmlReader reader, IDictionary<string, Type> extensions)
   {
      bool isEmpty = reader.IsEmptyElement;
      reader.ReadStartElement(SSO_TOKEN_ASSERTION);
      if (!isEmpty)
      {
         reader.ReadEndElement();
      }
   }

   public override IEnumerable<KeyValuePair<string, Type>> GetExtensions()
   {
      return new[] { new KeyValuePair<string, Type>(SSO_TOKEN_ASSERTION, GetType()) };
   }
}

In the above code we’re only really doing a couple of things. The first is creating an output filter, which will add our SSO token to the SOAP envelope header on its way to the server. The ReadXml override simply reads past the ssoTokenAssertion element in the policy configuration – in this case we have no settings to pick up from it, but we might for some other application.

public class SsoClientSendFilter : SendSecurityFilter
{
   protected const string SSO_HEADER_ELEMENT = "ssoToken";

   public SsoClientSendFilter(SecurityPolicyAssertion parentAssertion) :
      base(parentAssertion.ServiceActor, true)
   {
   }

   public override void SecureMessage(SoapEnvelope envelope, Security security)
   {
      string ssoTokenString = SsoManager.TokenString;

      if (String.IsNullOrEmpty(ssoTokenString))
      {
          throw new ApplicationException(
             "Could not generate an SSO token. Please ensure your user name exists within SSO and the password matches the one expected by SSO.");
      }

      XmlElement ssoTokenElement = envelope.CreateElement(SSO_HEADER_ELEMENT);
      ssoTokenElement.InnerText = ssoTokenString;
      envelope.Header.AppendChild(ssoTokenElement);
   }
}

The above code is probably fairly self-explanatory. The SsoManager class is a singleton which simply authenticates a user and creates a token string. We attach this to the envelope header by creating an XmlElement and appending it to the header.
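The real SsoManager isn’t shown here; as a rough, purely hypothetical sketch of the shape the filter above relies on (the Authenticate logic and token format are invented for illustration – a real implementation would call the SSO service):

```csharp
using System;
using System.Text;

// Hypothetical sketch of the SsoManager singleton used by SsoClientSendFilter.
public sealed class SsoManager
{
   private static readonly Lazy<SsoManager> instance =
      new Lazy<SsoManager>(() => new SsoManager());

   private string token;

   private SsoManager() { }

   public static SsoManager Instance
   {
      get { return instance.Value; }
   }

   // Matches the static SsoManager.TokenString usage in the filter.
   public static string TokenString
   {
      get { return Instance.token; }
   }

   public void Authenticate(string userName, string password)
   {
      // Illustrative only – a real implementation would validate the
      // credentials against SSO and cache the token it returns.
      token = Convert.ToBase64String(
         Encoding.UTF8.GetBytes(userName + ":" + Guid.NewGuid()));
   }
}
```

If authentication hasn’t happened (or failed), TokenString stays null, which is what triggers the ApplicationException in SecureMessage.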

Further Reading

Just came across Programming with WSE which has some interesting posts.

Reading and/or writing xsd files

I wanted to look at reading an xsd (XML schema) file and generate C# source from it in a similar way to xsd.exe.

As XML schema is itself XML, I first looked at writing the code using an XmlReader, but whilst this might be an efficient mechanism for reading the file, it’s a bit of a pain to write the code to process the elements and attributes. So what’s the alternative?

Well, there’s actually a much simpler class we can use, named XmlSchema, which, admittedly, reads the whole xsd into memory – but performance isn’t currently a concern for me.

Note: I’m going to deal with XmlSchema as a reader but there’s a good example of using it to write an XML schema at XmlSchema Class

Here’s a quick example of reading a stream (which contains an XML schema)

using (XmlReader reader = new XmlTextReader(stream))
{
   XmlSchema schema = XmlSchema.Read(reader, (sender, args) =>
   {
      Console.WriteLine(args.Message);
   });
   // process the schema items etc. here
}

The Console.WriteLine(args.Message) code is within a ValidationEventHandler delegate which is called when syntax errors are detected.

Once we’ve successfully got an XmlSchema we can interact with its Items. For example, here’s some code which loops through all the complex types and then processes the elements and attributes within each one

foreach (var item in schema.Items)
{
   XmlSchemaComplexType complexType = item as XmlSchemaComplexType;
   if (complexType != null)
   {
      XmlSchemaSequence sequence = complexType.Particle as XmlSchemaSequence;
      if (sequence != null)
      {
         foreach (var seqItem in sequence.Items)
         {
            XmlSchemaElement element = seqItem as XmlSchemaElement;
            if (element != null)
            {
               // process elements
            }
         }		
      }
      foreach (var attributeItem in complexType.Attributes)
      {
         XmlSchemaAttribute attribute = attributeItem as XmlSchemaAttribute;
         if (attribute != null)
         {
            // process attributes
         }
      }
   }
}
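Putting the two snippets together, here’s a self-contained sketch; the PersonType schema is just an illustrative example of my own, any xsd reader would do:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml;
using System.Xml.Schema;

static class SchemaWalker
{
   // Illustrative schema with one complex type, two elements and an attribute.
   const string Xsd = @"<?xml version='1.0'?>
<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>
  <xs:complexType name='PersonType'>
    <xs:sequence>
      <xs:element name='Name' type='xs:string' />
      <xs:element name='Age' type='xs:int' />
    </xs:sequence>
    <xs:attribute name='id' type='xs:int' />
  </xs:complexType>
</xs:schema>";

   public static List<string> Walk()
   {
      var found = new List<string>();
      using (XmlReader reader = new XmlTextReader(new StringReader(Xsd)))
      {
         // the callback reports any schema syntax errors
         XmlSchema schema = XmlSchema.Read(reader,
            (sender, args) => Console.WriteLine(args.Message));

         foreach (var item in schema.Items)
         {
            XmlSchemaComplexType complexType = item as XmlSchemaComplexType;
            if (complexType == null)
               continue;

            found.Add("complexType:" + complexType.Name);

            XmlSchemaSequence sequence = complexType.Particle as XmlSchemaSequence;
            if (sequence != null)
            {
               foreach (var seqItem in sequence.Items)
               {
                  XmlSchemaElement element = seqItem as XmlSchemaElement;
                  if (element != null)
                     found.Add("element:" + element.Name);
               }
            }

            foreach (var attributeItem in complexType.Attributes)
            {
               XmlSchemaAttribute attribute = attributeItem as XmlSchemaAttribute;
               if (attribute != null)
                  found.Add("attribute:" + attribute.Name);
            }
         }
      }
      return found;
   }

   static void Main()
   {
      foreach (string s in Walk())
         Console.WriteLine(s);
   }
}
```

Running this lists the complex type PersonType, its Name and Age elements and its id attribute.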

Creating a custom panel using WPF

The Grid, StackPanel, WrapPanel and DockPanel are used to layout controls in WPF. All four are derived from the WPF Panel class. So if we want to create our own “custom panel” we obviously use the Panel as our starting point.

So to start with, we need to create a subclass of the Panel class in WPF. We then need to override both the MeasureOverride and ArrangeOverride methods.

public class MyCustomPanel : Panel
{
   protected override Size MeasureOverride(Size availableSize)
   {
      return base.MeasureOverride(availableSize);
   }

   protected override Size ArrangeOverride(Size finalSize)
   {
      return base.ArrangeOverride(finalSize);
   }
}

WPF implements a two-pass layout system to determine both the sizes and positions of child elements within the panel.

So the first phase of this process is to measure the child items and find what their desired size is, given the available size.

It’s important to note that we need to call a child element’s Measure method before we can interact with its DesiredSize property. For example

protected override Size MeasureOverride(Size availableSize)
{
   Size size = new Size(0, 0);

   foreach (UIElement child in Children)
   {
      child.Measure(availableSize);
      size.Width = Math.Max(size.Width, child.DesiredSize.Width);
      size.Height = Math.Max(size.Height, child.DesiredSize.Height);
   }

   size.Width = double.IsPositiveInfinity(availableSize.Width) ?
      size.Width : availableSize.Width;

   size.Height = double.IsPositiveInfinity(availableSize.Height) ? 
      size.Height : availableSize.Height;

   return size;
}

Note: We don’t want to return an infinite value for the width/height; if the available size is infinite we instead return the size calculated from the children

The next phase in this process is to handle the arrangement of the children using ArrangeOverride. For example

protected override Size ArrangeOverride(Size finalSize)
{
   foreach (UIElement child in Children)
   {
      child.Arrange(new Rect(0, 0, child.DesiredSize.Width, child.DesiredSize.Height));
   }
   return finalSize;
}

In the above minimal code, we’re simply getting each child element’s desired size and arranging the child at point 0, 0, giving the child its desired width and height. So nothing exciting there. However we could arrange the children in other, more interesting ways at this point, such as stacking them with an offset like a deck of cards, or largest to smallest (or vice versa), or maybe recreate an existing layout but use transformations to animate their arrangement.
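As a sketch of the “deck of cards” idea, the per-child offsets can be computed by a small pure helper (the 10-unit offset and the CardLayout name are just my illustrative choices); ArrangeOverride would then pair each point with a child:

```csharp
using System;
using System.Collections.Generic;

// Pure helper computing top-left points for a "deck of cards" layout.
// Inside the panel, ArrangeOverride would use it something like:
//
//    var points = CardLayout.Offsets(Children.Count, 10);
//    int i = 0;
//    foreach (UIElement child in Children)
//    {
//       var p = points[i++];
//       child.Arrange(new Rect(p.X, p.Y,
//          child.DesiredSize.Width, child.DesiredSize.Height));
//    }
public static class CardLayout
{
   public struct Point
   {
      public double X;
      public double Y;
   }

   public static IList<Point> Offsets(int childCount, double offset)
   {
      var points = new List<Point>();
      // each child sits slightly below and to the right of the previous one
      for (int i = 0; i < childCount; i++)
         points.Add(new Point { X = i * offset, Y = i * offset });
      return points;
   }
}
```

Keeping the offset arithmetic out of ArrangeOverride also makes the layout logic trivially testable without a WPF visual tree.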

Ninject ActivationStrategy

The Ninject ActivationStrategy allows us to write code which Ninject executes automatically during activation and/or deactivation of an object.

So let’s say when we create an object we want it to be created in a two-phase process, i.e. after creation we want to initialize the object. In such a situation we might define an initialization interface, such as

public interface IObjectInitializer
{
   void Initialize();
}

We might have an object which looks like the following

public class MyObject : IObjectInitializer
{
   public MyObject()
   {
      Debug.WriteLine("Constructor");            
   }

   public void Initialize()
   {
      Debug.WriteLine("Initialized");
   }
}

Now, when we want to create an instance of MyObject via Ninject, we obviously need to set up the relevant binding and get an instance of MyObject from the container, thus

StandardKernel kernel = new StandardKernel();

kernel.Bind<MyObject>().To<MyObject>();

MyObject obj = kernel.Get<MyObject>();
obj.Initialize();

In the above code we’ll get an instance of MyObject and then call the Initialize method. This may be a pattern we repeat often, so it’d be much better if Ninject could handle it for us.

To achieve this we can add an ActivationStrategy to NInject as follows

kernel.Components.Add<IActivationStrategy, MyInitializationStrategy>();

This will obviously need to be set-up prior to any instantiation of objects.

Now let’s look at the MyInitializationStrategy object

public class MyInitializationStrategy : ActivationStrategy
{
   public override void Activate(IContext context, InstanceReference reference)
   {
      reference.IfInstanceIs<IObjectInitializer>(x => x.Initialize());
   }
}

In actual fact, the people behind Ninject have already catered for two-phase creation by supplying (and, of course, adding to the Components collection) a strategy named InitializableStrategy, which does exactly what MyInitializationStrategy does. They also use the same ActivationStrategy mechanism for several other strategies, which handle property injection, method injection and more.

Another strategy that we can use in our own objects is StartableStrategy which handles objects which implement the IStartable interface. This supports both a Start and a Stop method on an object as part of the activation and deactivation.

We can also implement code to be executed upon activation/deactivation via the fluent binding interface, for example

kernel.Bind<MyObject>().
   To<MyObject>().
   OnActivation(x => x.Initialize()).
   OnDeactivation(_ => Debug.WriteLine("Deactivation"));

Therefore in this instance we need not create an activation strategy for our activation/deactivation code, but can instead use the OnActivation and/or OnDeactivation methods.

Note: Remember that if your object implements IInitializable and you also duplicate the calls within the OnActivation/OnDeactivation methods, your code will be called twice.

Composing a Prism UI using regions

Monolithic applications are (or should be) a thing of the past. We want to create applications which are composable from various parts, preferably with loose coupling, to allow them to be added to or reconfigured with minimal effort.

There are various composable libraries for WPF; for this post I’m going to concentrate on Prism. Prism allows us to partition an application by creating areas within a view for each UI element. These areas are known as regions.

Assuming we have a minimal Prism application as per my post Initial steps to setup a Prism application, let’s begin by creating a “MainRegion”, a region/view which takes up the whole of the Shell window.

  • In the Shell.xaml, add the name space
    xmlns:cal="http://www.codeplex.com/prism"
    
  • Replace any content you have in the shell with the following
    <ItemsControl cal:RegionManager.RegionName="MainRegion" />
    

    here we’ve created an ItemsControl and given it a region name of “MainRegion”. An ItemsControl allows us to display multiple items; equally, we could have used a ContentControl for a single item.

  • We’re going to create a new class library for our view(s), so add a class library project to your solution, mine’s named Module1
  • To keep our views together create a View folder within the project
  • Add a WPF UserControl (mine’s named MyView) to the View folder, mine has a TextBlock within it, thus
    <TextBlock Text="My View" />   
    

    just to give us something to see when the view is loaded.

  • Add a class (I’ve named mine Module1Module) with the following code
    public class Module1Module : IModule
    {
       private readonly IRegionViewRegistry regionViewRegistry;
    
       public Module1Module(IRegionViewRegistry registry)
       {
          regionViewRegistry = registry;   
       }
    
       public void Initialize()
       {
          regionViewRegistry.RegisterViewWithRegion("MainRegion", 
                   typeof(Views.MyView));
       }
    }
    

    Here we’re setting up an IModule implementation which associates a view with a region name.

  • Reference the class library project in the shell project

Using Unity

  • Now with our Unity bootstrapper, we need to add the module to the module catalog, as per the following
    protected override void ConfigureModuleCatalog()
    {
       base.ConfigureModuleCatalog();
       ModuleCatalog moduleCatalog = (ModuleCatalog)this.ModuleCatalog;
       moduleCatalog.AddModule(typeof(Module1.Module1Module));
    }
    

Using MEF

  • Now with our MEF bootstrapper, we need to add the module to the module catalog, as per the following
    protected override void ConfigureAggregateCatalog()
    {
       base.ConfigureAggregateCatalog();
       AggregateCatalog.Catalogs.Add(new AssemblyCatalog(GetType().Assembly));
       AggregateCatalog.Catalogs.Add(new AssemblyCatalog(
               typeof(Module1.Module1Module).Assembly));
    }
    
  • In our view, we need to mark the class with the ExportAttribute, thus
    [Export]
    public partial class MyView : UserControl
    {
       public MyView()
       {
          InitializeComponent();
       }
    }
    
  • Now we need to change the module code to the following
    [ModuleExport(typeof(Module1Module), 
       InitializationMode=InitializationMode.WhenAvailable)]
    public class Module1Module : IModule
    {
       private readonly IRegionViewRegistry regionViewRegistry;
    
       [ImportingConstructor]
       public Module1Module(IRegionViewRegistry registry)
       {
          regionViewRegistry = registry;
       }
    
       public void Initialize()
       {
          regionViewRegistry.RegisterViewWithRegion("MainRegion", 
               typeof(Views.MyView));
       }
    }
    

Obviously in this sample we created a single region and embedded a single view, but we can easily create multiple named regions to truly “compose” our application from multiple views.
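For instance, the Shell could declare two named regions side by side; the DockPanel layout and the NavigationRegion name here are just illustrative choices:

```xml
<DockPanel>
   <!-- a navigation view docked on the left -->
   <ContentControl DockPanel.Dock="Left"
                   cal:RegionManager.RegionName="NavigationRegion" />
   <!-- the main content fills the remainder -->
   <ItemsControl cal:RegionManager.RegionName="MainRegion" />
</DockPanel>
```

Each module then registers its views against whichever region name it targets, via RegisterViewWithRegion as shown earlier.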