Monthly Archives: June 2014

Reading and/or writing xsd files

I wanted to look at reading an xsd (XML schema) file and generating C# source from it, in a similar way to xsd.exe.

As XML schema is itself XML, I first looked at writing the code using an XmlReader, but whilst this might be an efficient mechanism for reading the file, it’s a bit of a pain to write the code to process the elements and attributes. So what’s the alternative?

Well, there’s actually a much simpler class we can use, named XmlSchema, which, admittedly, will read the whole xsd into memory, but I’m not currently concerned with performance.

Note: I’m going to deal with XmlSchema as a reader but there’s a good example of using it to write an XML schema at XmlSchema Class
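For a flavour of the writing side, here’s a minimal sketch of my own (not the linked example) which builds a schema containing a single string element in memory and writes it out:

XmlSchema schema = new XmlSchema();

// a single element of type xs:string
XmlSchemaElement element = new XmlSchemaElement
{
   Name = "Person",
   SchemaTypeName = new XmlQualifiedName("string", "http://www.w3.org/2001/XMLSchema")
};
schema.Items.Add(element);

// serialize the schema as xsd to the console
schema.Write(Console.Out);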

Here’s a quick example of reading a stream (which contains an XML schema)

using (XmlReader reader = new XmlTextReader(stream))
{
   XmlSchema schema = XmlSchema.Read(reader, (sender, args) =>
   {
      Console.WriteLine(args.Message);
   });
   // process the schema items etc. here
}

The Console.WriteLine(args.Message) code is within a ValidationEventHandler delegate which is called when syntax errors are detected.

Once we successfully get an XmlSchema we can interact with its Items; for example, here’s some code intended to loop through all the complex types and then process the elements and attributes within each one

foreach (var item in schema.Items)
{
   XmlSchemaComplexType complexType = item as XmlSchemaComplexType;
   if (complexType != null)
   {
      XmlSchemaSequence sequence = complexType.Particle as XmlSchemaSequence;
      if (sequence != null)
      {
         foreach (var seqItem in sequence.Items)
         {
            XmlSchemaElement element = seqItem as XmlSchemaElement;
            if (element != null)
            {
               // process elements
            }
         }
      }
      foreach (var attributeItem in complexType.Attributes)
      {
         XmlSchemaAttribute attribute = attributeItem as XmlSchemaAttribute;
         if (attribute != null)
         {
            // process attributes
         }
      }
   }
}
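To tie this back to the original aim of generating C# source in the style of xsd.exe, the “process elements” step might look something like the following sketch, where ToProperty is a hypothetical helper mapping a couple of common schema types to C# auto-properties (a real implementation would need a much fuller type mapping):

private static string ToProperty(XmlSchemaElement element)
{
   // map a few built-in schema types to C# types; default to string
   string type;
   switch (element.SchemaTypeName.Name)
   {
      case "int": type = "int"; break;
      case "boolean": type = "bool"; break;
      case "double": type = "double"; break;
      default: type = "string"; break;
   }
   return String.Format("public {0} {1} {{ get; set; }}", type, element.Name);
}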

Creating a custom panel using WPF

The Grid, StackPanel, WrapPanel and DockPanel are used to lay out controls in WPF. All four are derived from the WPF Panel class, so if we want to create our own “custom panel” we obviously use Panel as our starting point.

So to start with, we need to create a subclass of the Panel class in WPF. We then need to override both the MeasureOverride and ArrangeOverride methods.

public class MyCustomPanel : Panel
{
   protected override Size MeasureOverride(Size availableSize)
   {
      return base.MeasureOverride(availableSize);
   }

   protected override Size ArrangeOverride(Size finalSize)
   {
      return base.ArrangeOverride(finalSize);
   }
}

WPF implements a two-pass layout system to determine both the sizes and positions of child elements within the panel.

So the first phase of this process is to measure the child items to find their desired sizes, given the available size.

It’s important to note that we need to call a child element’s Measure method before we can interact with its DesiredSize property. For example

protected override Size MeasureOverride(Size availableSize)
{
   Size size = new Size(0, 0);

   foreach (UIElement child in Children)
   {
      child.Measure(availableSize);
      size.Width = Math.Max(size.Width, child.DesiredSize.Width);
      size.Height = Math.Max(size.Height, child.DesiredSize.Height);
   }

   size.Width = double.IsPositiveInfinity(availableSize.Width) ?
      size.Width : availableSize.Width;

   size.Height = double.IsPositiveInfinity(availableSize.Height) ? 
      size.Height : availableSize.Height;

   return size;
}

Note: We don’t want to return an infinite value for the width/height; if the available size is infinite we instead return the size measured from the children (which starts at 0)

The next phase in this process is to handle the arrangement of the children using ArrangeOverride. For example

protected override Size ArrangeOverride(Size finalSize)
{
   foreach (UIElement child in Children)
   {
      child.Arrange(new Rect(0, 0, child.DesiredSize.Width, child.DesiredSize.Height));
   }
   return finalSize;
}

In the above, minimal, code we’re simply taking each child element’s desired size and arranging the child at point (0, 0) with its desired width and height. So nothing exciting there. However, we could arrange the children in other, more interesting ways at this point, such as stacking them with an offset like a deck of cards, ordering them largest to smallest (or vice versa), or maybe recreating an existing layout but using transformations to animate their arrangement.
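As a flavour of that, here’s a sketch of the deck of cards idea, offsetting each child by an arbitrary 10 pixels:

protected override Size ArrangeOverride(Size finalSize)
{
   const double offset = 10;
   double x = 0, y = 0;

   foreach (UIElement child in Children)
   {
      // arrange each child at its desired size, shifted from the previous one
      child.Arrange(new Rect(x, y, child.DesiredSize.Width, child.DesiredSize.Height));
      x += offset;
      y += offset;
   }
   return finalSize;
}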

Ninject ActivationStrategy

The Ninject ActivationStrategy allows us to create code which will be executed automatically by Ninject during activation and/or deactivation of an object.

So let’s say when we create an object we want it to be created in a two-phase process, i.e. after creation we want to initialize the object. In such a situation we might define an initialization interface, such as

public interface IObjectInitializer
{
   void Initialize();
}

We might have an object which looks like the following

public class MyObject : IObjectInitializer
{
   public MyObject()
   {
      Debug.WriteLine("Constructor");            
   }

   public void Initialize()
   {
      Debug.WriteLine("Initialized");
   }
}

Now when we want to create an instance of MyObject via Ninject, we obviously need to set up the relevant binding and get an instance of MyObject from the container, thus

StandardKernel kernel = new StandardKernel();

kernel.Bind<MyObject>().To<MyObject>();

MyObject obj = kernel.Get<MyObject>();
obj.Initialize();

In the above code we get an instance of MyObject and then call the Initialize method, and this may be a pattern we repeat often; hence it’d be much better if Ninject could handle this for us.

To achieve this we can add an ActivationStrategy to NInject as follows

kernel.Components.Add<IActivationStrategy, MyInitializationStrategy>();

This will obviously need to be set up prior to any instantiation of objects.

Now let’s look at the MyInitializationStrategy object

public class MyInitializationStrategy : ActivationStrategy
{
   public override void Activate(IContext context, InstanceReference reference)
   {
      reference.IfInstanceIs<IObjectInitializer>(x => x.Initialize());
   }
}

In actual fact, the people behind Ninject have already catered for two-phase creation by supplying (and of course adding to the Components collection) a strategy named InitializableStrategy, which does exactly what MyInitializationStrategy does for objects implementing Ninject’s IInitializable interface. They use the same ActivationStrategy mechanism for several other strategies, which handle property injection, method injection and more.

Another strategy that we can make use of in our own objects is StartableStrategy, which handles objects implementing the IStartable interface; this supports both a Start and a Stop method called as part of activation and deactivation.
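For example, a startable object might look like the following sketch (PollingService is just a hypothetical name; IStartable here is Ninject’s interface):

public class PollingService : IStartable
{
   public void Start()
   {
      // called by Ninject when the object is activated
      Debug.WriteLine("Started");
   }

   public void Stop()
   {
      // called by Ninject when the object is deactivated
      Debug.WriteLine("Stopped");
   }
}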

We can also implement code to be executed upon activation/deactivation via the fluent binding interface, for example

kernel.Bind<MyObject>().
   To<MyObject>().
   OnActivation(x => x.Initialize()).
   OnDeactivation(_ => Debug.WriteLine("Deactivation"));

Therefore, in this instance, we need not create an activation strategy for our activation/deactivation code but can instead use the OnActivation and/or OnDeactivation methods.

Note: Remember that if your object implements IInitializable and you also duplicate the calls within the OnActivation/OnDeactivation methods, your code will be called twice

Composing a Prism UI using regions

Monolithic applications are (or should be) a thing of the past. We want to create applications which are composable from various parts, preferably with loose coupling to allow them to be added to or reconfigured with minimal effort.

There are various composable UI libraries for WPF; in this post I’m going to concentrate on Prism. Prism allows us to partition an application by creating areas within a view for each UI element. These areas are known as regions.

Assuming we have a minimal Prism application as per my post Initial steps to setup a Prism application, let’s begin by creating a “MainRegion”: a region/view which takes up the whole of the Shell window.

  • In Shell.xaml, add the namespace
    xmlns:cal="http://www.codeplex.com/prism"
    
  • Replace any content you have in the shell with the following
    <ItemsControl cal:RegionManager.RegionName="MainRegion" />
    

    here we’ve created an ItemsControl and given it a region name of “MainRegion”. An ItemsControl allows us to display multiple items; equally, we could have used a ContentControl for a single item.

  • We’re going to create a new class library for our view(s), so add a class library project to your solution, mine’s named Module1
  • To keep our views together, create a Views folder within the project
  • Add a WPF UserControl (mine’s named MyView) to the Views folder; it contains just a TextBlock, thus
    <TextBlock Text="My View" />   
    

    just to give us something to see when the view is loaded.

  • Add a class (mine’s named Module1Module) with the following code
    public class Module1Module : IModule
    {
       private readonly IRegionViewRegistry regionViewRegistry;
    
       public Module1Module(IRegionViewRegistry registry)
       {
          regionViewRegistry = registry;   
       }
    
       public void Initialize()
       {
          regionViewRegistry.RegisterViewWithRegion("MainRegion", 
                   typeof(Views.MyView));
       }
    }
    

    Here we’re setting up an IModule implementation which associates a view with a region name.

  • Reference the class library project in the shell project

Using Unity

  • Now with our Unity bootstrapper, we need to add the module to the module catalog, as per the following
    protected override void ConfigureModuleCatalog()
    {
       base.ConfigureModuleCatalog();
       ModuleCatalog moduleCatalog = (ModuleCatalog)this.ModuleCatalog;
       moduleCatalog.AddModule(typeof(Module1.Module1Module));
    }
    

Using MEF

  • Now with our MEF bootstrapper, we need to add the module to the module catalog, as per the following
    protected override void ConfigureAggregateCatalog()
    {
       base.ConfigureAggregateCatalog();
       AggregateCatalog.Catalogs.Add(new AssemblyCatalog(GetType().Assembly));
       AggregateCatalog.Catalogs.Add(new AssemblyCatalog(
               typeof(Module1.Module1Module).Assembly));
    }
    
  • In our view, we need to mark the class with the ExportAttribute, thus
    [Export]
    public partial class MyView : UserControl
    {
       public MyView()
       {
          InitializeComponent();
       }
    }
    
  • Now we need to change the module code to the following
    [ModuleExport(typeof(Module1Module), 
       InitializationMode=InitializationMode.WhenAvailable)]
    public class Module1Module : IModule
    {
       private readonly IRegionViewRegistry regionViewRegistry;
    
       [ImportingConstructor]
       public Module1Module(IRegionViewRegistry registry)
       {
          regionViewRegistry = registry;
       }
    
       public void Initialize()
       {
          regionViewRegistry.RegisterViewWithRegion("MainRegion", 
               typeof(Views.MyView));
       }
    }
    

Obviously in this sample we created a single region and embedded a single view, but we can easily create multiple named regions to truly “compose” our application from multiple views.
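For example, the module’s Initialize method might register several views; HeaderView and FooterView below are hypothetical, and assume the shell declares matching region names:

public void Initialize()
{
   regionViewRegistry.RegisterViewWithRegion("HeaderRegion", typeof(Views.HeaderView));
   regionViewRegistry.RegisterViewWithRegion("MainRegion", typeof(Views.MyView));
   regionViewRegistry.RegisterViewWithRegion("FooterRegion", typeof(Views.FooterView));
}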

Introduction to using Pex with Microsoft Code Digger

This post is specific to the Code Digger Add-In, which can be used with Visual Studio 2012 and 2013.

Requirements

Code Digger appears in Tools | Extensions and Updates and of course can be downloaded via this dialog.

What is Pex ?

So Pex is a tool for automatically generating test suites. Pex will generate input/output values for your methods by analysing the code flow and the arguments required by each method.

What is Code Digger ?

Code Digger supplies an add-in for Visual Studio which allows us to select a method, generate inputs/outputs using Pex and display the results within Visual Studio.

Let’s use Code Digger

Enough talk, let’s write some code and try it out.

Create a new solution; I’m going to create a “standard” class library project. Older versions of Code Digger only worked with PCLs, but now (I’m using 0.95.4) you can go to Tools | Options in Visual Studio, select Pex’s General option and change DisableCodeDiggerPortableClassLibraryRestriction to True (if it’s not already set) to allow Pex to run against non-PCL code.

Let’s start with a very simple class and a few methods

public static class Statistics
{
   public static double Mean(double[] values)
   {
      return values.Average();
   }

   public static double Median(double[] values)
   {
      Array.Sort(values);

      int mid = values.Length / 2;
      return (values.Length % 2 == 0) ?
         (values[mid - 1] + values[mid]) / 2 :
         values[mid];
   }

   public static double[] Mode(double[] values)
   {
      var grouped = values.GroupBy(v => v).OrderBy(g => g.Count());
      int max = grouped.Max(g => g.Count());

      return (max <= 1) ?
         new double[0] :
         grouped.Where(g => g.Count() == max).Select(g => g.Key).ToArray();
   }
}

Now you may have noticed we do not check for the “values” array being null or empty. This is on purpose, to demonstrate Pex detecting possible failures.

Now, we’ll use the Code Digger add-in.

Right-click on a method, let’s take the Mean method to begin with, and select Generate Inputs / Outputs Table. Pex will run and create a list of inputs and outputs. In my code for Mean I get two failures: Pex has executed my method with a null input and with an empty array, and neither case is handled (as mentioned previously) by my Mean code.

If you now try the other methods you should see similar failures, but hopefully also more successes with more input values.

Unfortunately (at the time of writing at least) there doesn’t appear to be an option in Code Digger to generate unit tests automatically or to save the inputs for use in your own unit tests. So for now you’ll have to manually write your tests with the failing inputs and implement code to make them pass.
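For example, a test capturing the null input failure that Pex found, plus a guard clause in Mean to make it pass, might look like the following (a sketch assuming MSTest):

[TestClass]
public class StatisticsTests
{
   [TestMethod]
   [ExpectedException(typeof(ArgumentNullException))]
   public void Mean_NullValues_Throws()
   {
      Statistics.Mean(null);
   }
}

// and the corresponding guard clause added to Mean
public static double Mean(double[] values)
{
   if (values == null)
      throw new ArgumentNullException("values");

   return values.Average();
}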

Note: I did find at one time that the Generate Inputs / Outputs Table menu option went missing; I disabled and re-enabled the Code Digger add-in, restarted Visual Studio, and it reappeared.

Debugging a release build of a .NET application

What’s a Release Build compared to a Debug build

Release builds of a .NET application (by default) add optimizations and remove any debug-only code from the build, i.e. anything inside #if DEBUG is removed, as are Debug.* calls (note that Trace.* calls are not removed by default, because the TRACE constant is defined in both configurations). You also have reduced debug information. However, you will still have .PDB files…

PDB files are generated by the compiler if the project’s properties allow a .PDB file to be generated. Simply check the project properties, select the Build tab and then the Advanced… button; you’ll see Debug Info, which can be set to full, pdb-only or none. Obviously none will not produce any .PDB files.

At this point I do not know the differences between pdb-only and full (if I find out I’ll amend this post), but out of the box Release builds use pdb-only whilst Debug builds use full.

So what are .PDB files ?

Simply put – PDB files contain symbol information which allows us to map debug information to source files when we attach a debugger and step through the code.

Debugging a Release Build

It’s often the case that we’ll create a deployment of a Release build without the PDB files, perhaps to reduce the deployment footprint or for some other reason. If you cannot, or do not wish to, deploy the PDBs with an application, then you should store them for each specific release.

Before attaching our debugger (Visual Studio) we need to add the PDB file locations to Visual Studio. So select the Debug menu, then Options and Settings. From here select Debugging | Symbols in the tree view on the left of the Options dialog. Click the add folder button and type in (or paste) the folder name for the symbols for the specific Release build.

Now attach Visual Studio using Debug | Attach to Process and the symbols will be loaded for the build, and you can step through the source code.

Let’s look at a real example

An application I work on deploys over the network and we do not include PDB files with it, so as to reduce the size of the deployment. If we find a bug only repeatable in “production”, we cannot step through the source code related to the build without both the version of the source related to that release and the PDB files for that release.

When our continuous integration server runs, it builds a specific version of the application as a release build and embeds the source repository revision into the EXE version number. This allows us to easily check out the source related to that build if need be.
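For example, a hypothetical AssemblyInfo.cs entry, where 1234 stands in for the repository revision stamped in by the CI server:

// the last part of the version number carries the source repository revision
[assembly: AssemblyVersion("1.0.0.1234")]
[assembly: AssemblyFileVersion("1.0.0.1234")]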

During the build process we then copy the release build to a deployment folder, again using the source code revision in the folder name. As already mentioned, we remove the PDB files (Tests and other such files are also removed). However, we don’t just throw away the PDBs; we instead copy them to a folder named similarly to the release build’s folder but with “Symbols” in the name (and of course with the same version number). The PDBs are all copied to this folder and are accessible if we ever need to debug a release build.

Now if the Release (or production) build is executed and an error occurs, or we just need to step through code for some other reason, we can get the specific source for the deployed version, point Visual Studio at the PDB files for that build and step through our code.

So don’t just delete your PDBs; store them in case you need them in the future.

Okay, how do we use the symbol/PDB files

So, in Visual Studio (if you’re using that to debug/step through your code), open your project with the correct source for your release build.

In the Tools | Options dialog, select the Debugging parent node and then select Symbols, or of course just type Symbols into the search text box in Visual Studio 2013.

Now press the folder button and type in the location of your PDB files folder. Note that there’s no folder browse button here, so you’ll need to type (or copy and paste) the folder name yourself.

Ensure the folder is checked so that Visual Studio will load the symbols.

Now attach the debugger to your release build and Visual Studio will (as mentioned) locate and load the correct symbols, allowing you to step through your source.

See Specify Symbol (.pdb) and Source Files in the Visual Studio Debugger for more information and some screen shots of the process just described.

Downloading a file from URL using basic authentication

I had some code in an application I work on which uses Excel to open a .csv file from a URL. The problem is that users have moved to Excel 2010 (yes, we’re a little behind the latest versions) and basic authentication is no longer supported without registry changes (see Office file types fail to open from server).

So, to re-implement this, I needed to write some code to handle the file download myself (as we’re not able to change users’ registry settings).

The code is simple enough, but I thought it’d be useful to document it here anyway

using (WebClient client = new WebClient())
{
   // use the default proxy and supply credentials for basic authentication
   client.Proxy = WebRequest.DefaultWebProxy;
   client.Credentials = new NetworkCredential(userName, password);
   client.DownloadFile(url, filename);
}

This code assumes that the url is supplied along with a filename for where to save the downloaded file.

We use a proxy, hence it’s supplied; we then supply the NetworkCredential which will handle basic authentication. Here we need to supply the userName and password; of course with basic authentication these are passed as (effectively) plain text over the wire.
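One thing worth knowing: WebClient typically only sends the credentials in response to a 401 challenge from the server. If a server expects the credentials preemptively, the Authorization header can be set explicitly instead (inside the using block, before the DownloadFile call), something like this sketch:

// build the basic authentication header ourselves ("user:password", base64 encoded)
string token = Convert.ToBase64String(Encoding.ASCII.GetBytes(userName + ":" + password));
client.Headers[HttpRequestHeader.Authorization] = "Basic " + token;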