Category Archives: Programming

Unit testing and “The current SynchronizationContext may not be used as a TaskScheduler” error

When running unit tests (for example with xUnit) against code that requires a synchronization context, you might find the test failing with the message

The current SynchronizationContext may not be used as a TaskScheduler.

The easiest way to resolve this is to supply your own SynchronizationContext to the unit test class, for example in a static constructor (for xUnit) or in the SetUp method (for NUnit).

static MyTests()
{
   SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());		
}
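
For NUnit the equivalent is to set the context in a SetUp method, for example

[SetUp]
public void SetUp()
{
   SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());
}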

Note: xUnit supplies a synchronization context when running async tests, but when running Reactive Extensions or TPL code it seems we need to supply our own.

WeakReferences in .NET

By default, when we create an object in .NET we get what’s known as a strong reference to that object (in truth we usually simply call this a reference and omit the word strong). Only when the object is no longer required, i.e. it’s out of scope or no longer referenced, can that object be garbage collected.

So for example

// object o can be garbage collected any time after leaving this method call
public void Run()
{
   var o = new MyObject();

   o.DoSomething();
}

// object o can be garbage collected only after the SomeObject instance is no longer used,
// i.e. when no strong references to it exist
public class SomeObject
{
   private MyObject o = new MyObject();
}

This is fine in many cases, but the canonical example for using a WeakReference is where MyObject is a large object and an instance of it is held for a long time. If, despite being large, it’s actually rarely used, it would be more memory efficient to let its memory be reclaimed when no strong references to it exist and to recreate the object when it’s next required.

If we’re happy for our object to be garbage collected and recreated at a later time then we can instead hold it via a WeakReference. Obviously, from the examples above, the method call is not a good fit for a WeakReference as that instance of MyObject will be garbage collected after the method exits (assuming no further strong references to it exist). So let’s see how we might create a WeakReference stored as part of an object that might hang around in memory for a while.

public class SomeObject
{
   // WeakReference has no parameterless constructor, so start with a null target
   private readonly WeakReference o = new WeakReference(null);

   public void Run()
   {
      var myObject = GetMyObject();
      // now use myObject
   }

   private MyObject GetMyObject()
   {
      // take a strong reference first so the target cannot be collected
      // between the null check and the return
      var myObject = o.Target as MyObject;
      if (myObject == null)
      {
         myObject = new MyObject();
         o.Target = myObject;
      }
      return myObject;
   }
}

So in the above example we create a WeakReference. It’s best that we go through a helper method to get at the instance held within the weak reference, because we can then check whether the object still exists and, if it doesn’t, recreate it. Hence the GetMyObject method.

In GetMyObject we take a strong reference to the weak reference’s Target and check whether it’s null. In other words, either the object stored within the WeakReference has not yet been created or it has been garbage collected and we now need to recreate it. If it is null we create the (theoretically large) object, assign it to the Target property and return it; otherwise we simply return the existing instance.

At this point it appears we’re just recreating something like the Lazy<T> type. The key difference is that, unlike Lazy<T> which creates an object when needed and then holds a strong reference to it, the WeakReference pattern not only creates the object when needed but also allows the garbage collector to free the memory when it’s no longer strongly referenced. So obviously don’t store something in a WeakReference that tracks current state in your application unless you’re also persisting that data to a more permanent store.
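
As a quick illustration (the exact behaviour depends on the GC and the build configuration, so treat this as a sketch), once no strong reference to the target exists a collection is likely to reclaim it

var weak = new WeakReference(new MyObject());

// no strong reference to the MyObject instance remains, so after a full
// collection the weak reference's target has most likely been reclaimed
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

Console.WriteLine(weak.IsAlive); // most likely False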

References

https://msdn.microsoft.com/en-us/library/system.weakreference(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/system.weakreference.target(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/ee787088%28v=vs.110%29.aspx

Testing Reactive Extension code using the TestScheduler

I often use the Rx Throttle method, which is brilliant for scenarios such as letting a user type text into a textbox and only calling my code when they pause typing for a given amount of time, or scrolling up and down a list and only fetching more data for the selected item from a web service once the user stops changing the selection.

But what do we do about testing such interactions?

Sidetrack

Instead of looking at the Throttle method itself, I needed to create a throttle that buffers the data sent to it and, when the timeout occurs, gives the subscriber all of the buffered data rather than just the last item (as Throttle would). The code I wrote was based heavily upon a StackOverflow post.
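
I won’t reproduce that snippet here, but a minimal sketch of such a ThrottleWithBuffer extension method, based on the well-known Window/Throttle combination (so not necessarily exactly what I wrote), might look something like this

public static class ObservableExtensions
{
   // buffers items and, once the source has been quiet for dueTime,
   // emits everything buffered since the last emission
   public static IObservable<IList<T>> ThrottleWithBuffer<T>(
      this IObservable<T> source, TimeSpan dueTime, IScheduler scheduler)
   {
      return source.Publish(published =>
         published
            .Window(() => published.Throttle(dueTime, scheduler))
            .SelectMany(window => window.ToList())
            .Where(buffer => buffer.Count > 0));
   }
}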

This post is meant to be about how we test such code, so let’s look at one of the unit tests I have for this

[Fact]
public void ThrottleWithBuffer_EnsureValuesCorrectWhenOnNext()
{
   var scheduler = new TestScheduler();
   var expected = new[] { 'a', 'b', 'c' };
   var actual = new List<List<char>>();

   var subject = new Subject<char>();

   subject.AsObservable().
      ThrottleWithBuffer(TimeSpan.FromMilliseconds(500), scheduler).
      Subscribe(i => actual.Add(new List<char>(i)));

   scheduler.Schedule(TimeSpan.FromMilliseconds(100), () => subject.OnNext('a'));
   scheduler.Schedule(TimeSpan.FromMilliseconds(200), () => subject.OnNext('b'));
   scheduler.Schedule(TimeSpan.FromMilliseconds(300), () => subject.OnNext('c'));
   scheduler.Start();

   Assert.Equal(expected, actual[0]);
}

Now the key thing here is the use of the TestScheduler, which allows us to bend time (well, not really, but it does allow us to simulate the passing of time).

The TestScheduler is available in the Microsoft.Reactive.Testing.dll or from NuGet using Install-Package Rx-Testing

As you can see from the test, we first create the subscription to the observable and then schedule the “simulated” events on the scheduler. The first argument to Schedule tells it the virtual time an action is scheduled for, i.e. the first item is scheduled at 100ms, the next at 200ms and the last at 300ms. We then call Start on the scheduler to begin the time simulation.

Just think: if your code relied on something happening after a pause of a second then each test would have to wait a second before it could verify its data, making a large test suite a lot slower to run. Using the TestScheduler we simply simulate that second passing.

Accessing XAML resources in code

I have a bunch of brushes, colours etc. within the Generic.xaml file inside a ResourceDictionary and I needed to get at the resources from code.

So let’s assume our XAML looks like this

<ResourceDictionary 
   xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
   xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <SolidColorBrush 
       x:Key="DisabledBackgroundBrush" 
       Color="#FFF0F0F0" />
    <SolidColorBrush 
       x:Key="DisabledForegroundTextBrush" 
       Color="DarkGray" />
</ResourceDictionary>

Now, to access these resources from code, I’m going to use the following class

public static class GenericResources
{
   private static readonly ResourceDictionary genericResourceDictionary;

   static GenericResources()
   {
      var uri = new Uri("/MyAssembly;component/Generic.xaml", UriKind.Relative);
      genericResourceDictionary = (ResourceDictionary)Application.LoadComponent(uri);			
   }

   public static Brush ReferenceBackColor
   {
      get { return (Brush) genericResourceDictionary["DisabledBackgroundBrush"]; }
   }

   public static Brush DisabledForeground
   {
      get { return (Brush)genericResourceDictionary["DisabledForegroundTextBrush"]; }
   }
}

As you can see, we create a Uri to the XAML using the format “/<assembly_name>;component/<subfolders>/<xaml_filename>”. Next we load the ResourceDictionary via the LoadComponent method. Then, to access the resources by their keys, we simply use the dictionary’s indexer and cast to the expected type.
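
Usage from code is then just a matter of assigning the static properties wherever a Brush is expected, for example (myTextBox here is just a hypothetical control)

// e.g. in code-behind, greying out a hypothetical text box
myTextBox.Background = GenericResources.ReferenceBackColor;
myTextBox.Foreground = GenericResources.DisabledForeground;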

And that’s all there is to it.

Excluding assemblies from Code Coverage

I regularly run Visual Studio’s code coverage analysis to get an idea of my test coverage, but there’s a lot of auto-generated code in the project and I wanted to turn off the code coverage metrics for those assemblies.

I could add the ExcludeFromCodeCoverage attribute as outlined in a previous post, “How to exclude code from code coverage”, but this is a little laborious when there are many types to add it to and, in some cases, I don’t have control of the code-gen tools, so the attributes would need reapplying after every regeneration of the code – not exactly ideal.

There is a solution, as described in the post Customizing Code Coverage Analysis, which allows us to create a solution-wide file to exclude assemblies from code coverage. I’m going to summarize the steps to create the file here…

Creating the .runsettings file

  • Select your solution in the solution explorer and then right mouse click and select Add | New Item…
  • Select XML File and change the name to your solution name with the .runsettings extension (the name needn’t be the solution name but it’s a good starting point).
  • Now, I’ve taken the following from Customizing Code Coverage Analysis but reduced it to the bare minimum; I’d suggest you refer to the aforementioned post for a more complete file if you need the extra features.
    <?xml version="1.0" encoding="utf-8"?>
    <!-- File name extension must be .runsettings -->
    <RunSettings>
      <DataCollectionRunSettings>
        <DataCollectors>
          <DataCollector friendlyName="Code Coverage" uri="datacollector://Microsoft/CodeCoverage/2.0" assemblyQualifiedName="Microsoft.VisualStudio.Coverage.DynamicCoverageDataCollector, Microsoft.VisualStudio.TraceCollector, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
            <Configuration>
              <CodeCoverage>
                <!-- Match assembly file paths: -->
                <ModulePaths>
                  <Include>
                    <ModulePath>.*\.dll$</ModulePath>
                    <!--<ModulePath>.*\.exe$</ModulePath>-->
                  </Include>
                  <Exclude>
                    <ModulePath>.*AutoGenerated.dll</ModulePath>
                    <ModulePath>.*Tests.dll</ModulePath>
                  </Exclude>
                </ModulePaths>
    
                <!-- We recommend you do not change the following values: -->
                <UseVerifiableInstrumentation>True</UseVerifiableInstrumentation>
                <AllowLowIntegrityProcesses>True</AllowLowIntegrityProcesses>
                <CollectFromChildProcesses>True</CollectFromChildProcesses>
                <CollectAspDotNet>False</CollectAspDotNet>
    
              </CodeCoverage>
            </Configuration>
          </DataCollector>
        </DataCollectors>
      </DataCollectionRunSettings>
    </RunSettings>
    

    In the above file you’ll note I’ve included all DLLs using the regular expression .*\.dll$ but then gone on to exclude a couple of assemblies.

    Note: if I did NOT include the .* in the exclude module paths I found that the analysis still included those files. Just typing the correct name of the assembly on its own failed; I needed the .* prefix for the exclusion to work.

  • The include happens first and then the exclude takes place. Hence we can use wildcards in the include then exclude certain assemblies explicitly.
  • Before we can actually use the .runsettings file we need to tell Visual Studio about it. So, before you test your changes, select the Test menu, then Test Settings, followed by Select Test Settings File, and select your .runsettings file.

    Note: you can tick/untick the selected file via the same menu option to turn on/off the runsettings file being used

Now I can run code coverage across my code and will see only the assemblies that matter to me.

Adding a drop shadow to a WPF popup

When I display a popup, quite often I want it to stand out a little from the control it’s displaying over. The most obvious and simplest way to do this is by displaying it with a drop shadow.

Here’s the code to do this

<Popup AllowsTransparency="True">
   <Grid>
      <Border Margin="0,0,8,8" Background="White" BorderThickness="1">
         <Border.Effect>
            <DropShadowEffect BlurRadius="5" Opacity="0.4"/>
         </Border.Effect>
         <SomeControl/>
      </Border>
   </Grid>
</Popup>

The key things are that you wrap the control(s) you wish to display within the popup inside a Border control, ensure the border has a margin (to leave room for the shadow) and then set the drop shadow effect on the border. The other key thing is that the popup should have AllowsTransparency set to true, otherwise I’ve found the area around the border is just black (i.e. no shadow).

Changing a value within an XML document using F#

I needed to simulate some user interaction with our web services in an automated manner. This post is specific to one small part of that requirement – here we’re simply going to take some XML, change a value within the data and return the altered data.

The code is very simple but demonstrates how to deal with namespaces as well as using XmlDocument and XPath.

Note: I’m using XmlDocument here and XPath for simplicity but obviously this is not the most performant on large documents.
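
The original snippet isn’t shown here, but a minimal sketch of the idea might look like the following – the namespace URI and the XPath expression are purely illustrative placeholders

open System.Xml

// load the XML, change the text of a single element located via XPath
// and return the altered document as a string
let changeValue (xml : string) (newValue : string) =
    let doc = XmlDocument()
    doc.LoadXml xml

    // a default namespace cannot be used directly in XPath,
    // so map it to a prefix via a namespace manager
    let namespaces = XmlNamespaceManager(doc.NameTable)
    namespaces.AddNamespace("ns", "http://example.com/schema")

    let node = doc.SelectSingleNode("//ns:order/ns:status", namespaces)
    if not (isNull node) then
        node.InnerText <- newValue

    doc.OuterXml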

Dynamically extending an object’s properties using TypeDescriptor

The title of this post is slightly misleading in that what I’m really aiming to do is to “appear” to dynamically add properties to an object at runtime, which are then discoverable via calls to the TypeDescriptor.GetProperties method.

What’s the use case?

The most obvious use for this “technique” is in UI programming, where you might want an object to “appear” to have more properties than it really has or, more likely, where an object has an array of values which you want to appear as if each item were a property on the object. For example

public class MyObject
{
   public string Name { get; set; }
   public int[] Values { get; set; }
}

We might want this to appear as if the Values were actually properties, named P1 to Pn. This is particularly useful when data binding to a grid control of some sort, where we want to see the Name and each of the Values as columns in the grid.

Let’s look at how we achieve this…

Eh? Can I see some code?

The above use case is an actual one we have on the project I’m working on, but let’s do something a lot simpler to demonstrate the concepts. Let’s instead create an object with three properties and dynamically add a fourth. The class looks like this

public class ThreeColumns
{
   public string One { get; set; }
   public string Two { get; set; }
   public string Three { get; set; }
}

I’ll now show the code that (once everything is implemented) we’ll use to create and add a fourth property, before looking at the actual code that achieves this

// this would tend to be a class instance field to ensure the dynamic properties
// are not GC'd unexpectedly
DynamicPropertyManager<ThreeColumns> propertyManager;

propertyManager = new DynamicPropertyManager<ThreeColumns>();
propertyManager.Properties.Add(
   DynamicPropertyManager<ThreeColumns>.CreateProperty<ThreeColumns, string>(
      "Four",
      t => GetTheValueForFourFromSomewhere(),
      null
));

So the idea is that we’ll create a property manager class which allows us to add properties to the type ThreeColumns – but remember, these added properties will only be visible to code that uses the TypeDescriptor to get the list of properties from the object.

If we now created a list of ThreeColumns objects and data bound it to the DataSource of a grid (such as the Infragistics UltraGrid or XamDataGrid), it would show four columns: the three real properties plus the dynamically created one.

Implementation

Alright, we’ve seen the “end game”; let’s now look at how we create the property manager, which is used to maintain and give access to an implementation of a TypeDescriptionProvider. The property manager looks like this

public class DynamicPropertyManager<TTarget> : IDisposable
{
   private readonly DynamicTypeDescriptionProvider provider;
   private readonly TTarget target;

   public DynamicPropertyManager()
   {
      Type type = typeof (TTarget);

      provider = new DynamicTypeDescriptionProvider(type);
      TypeDescriptor.AddProvider(provider, type);
   }

   public DynamicPropertyManager(TTarget target)
   {
      this.target = target;

      provider = new DynamicTypeDescriptionProvider(typeof(TTarget));
      TypeDescriptor.AddProvider(provider, target);
   }

   public IList<PropertyDescriptor> Properties
   {
      get { return provider.Properties; }
   }

   public void Dispose()
   {
      if (ReferenceEquals(target, null))
      {
         TypeDescriptor.RemoveProvider(provider, typeof(TTarget));
      }
      else
      {
         TypeDescriptor.RemoveProvider(provider, target);
      }
   }

   public static DynamicPropertyDescriptor<TTargetType, TPropertyType> 
      CreateProperty<TTargetType, TPropertyType>(
          string displayName, 
          Func<TTargetType, TPropertyType> getter, 
          Action<TTargetType, TPropertyType> setter, 
          Attribute[] attributes)
   {
      return new DynamicPropertyDescriptor<TTargetType, TPropertyType>(
         displayName, getter, setter, attributes);
   }

   public static DynamicPropertyDescriptor<TTargetType, TPropertyType> 
      CreateProperty<TTargetType, TPropertyType>(
         string displayName, 
         Func<TTargetType, TPropertyType> getHandler, 
         Attribute[] attributes)
   {
      return new DynamicPropertyDescriptor<TTargetType, TPropertyType>(
         displayName, getHandler, (t, p) => { }, attributes);
   }
}

In the above code you’ll notice we can create our dynamic properties on both a type and an instance of a type. Beware, not all UI controls will query for the properties on an instance, but instead will just get those on the type.
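For example, in theory we can attach an extra property to just a single object via the second constructor – a quick sketch (whether a control actually sees it depends on it passing the instance, not just the type, to TypeDescriptor.GetProperties)

var item = new ThreeColumns();
using (var instanceManager = new DynamicPropertyManager<ThreeColumns>(item))
{
   instanceManager.Properties.Add(
      DynamicPropertyManager<ThreeColumns>.CreateProperty<ThreeColumns, string>(
         "Five",
         t => "Five",
         null));

   // GetProperties is given the instance, so the instance-registered provider is consulted
   var properties = TypeDescriptor.GetProperties(item);
   Console.WriteLine(properties.Count); // 4, assuming no other providers have been added
}
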

As mentioned, the property manager basically manages the lifetime of our TypeDescriptionProvider implementation, so let’s take a look at that code now

public class DynamicTypeDescriptionProvider : TypeDescriptionProvider
{
   private readonly TypeDescriptionProvider provider;
   private readonly List<PropertyDescriptor> properties = new List<PropertyDescriptor>();

   public DynamicTypeDescriptionProvider(Type type)
   {
      provider = TypeDescriptor.GetProvider(type);
   }

   public IList<PropertyDescriptor> Properties
   {
      get { return properties; }
   }

   public override ICustomTypeDescriptor GetTypeDescriptor(Type objectType, object instance)
   {
      return new DynamicCustomTypeDescriptor(
         this, provider.GetTypeDescriptor(objectType, instance));
   }

   private class DynamicCustomTypeDescriptor : CustomTypeDescriptor
   {
      private readonly DynamicTypeDescriptionProvider provider;

      public DynamicCustomTypeDescriptor(DynamicTypeDescriptionProvider provider, 
         ICustomTypeDescriptor descriptor)
            : base(descriptor)
      {
         this.provider = provider;
      }

      public override PropertyDescriptorCollection GetProperties()
      {
         return GetProperties(null);
      }

      public override PropertyDescriptorCollection GetProperties(Attribute[] attributes)
      {
         var properties = new PropertyDescriptorCollection(null);

         foreach (PropertyDescriptor property in base.GetProperties(attributes))
         {
            properties.Add(property);
         }

         foreach (PropertyDescriptor property in provider.Properties)
         {
            properties.Add(property);
         }
         return properties;
      }
   }
}

Note: in the inner class DynamicCustomTypeDescriptor we simply append our dynamic properties to the existing ones when creating the PropertyDescriptorCollection; however, we could instead replace or merge them with the existing object’s properties – for example to replace/intercept an existing property. Also, I’ve kept the code as simple as possible, but you’d most likely want to cache the properties when the PropertyDescriptorCollection is created to save rebuilding it every time.

So the purpose of the DynamicTypeDescriptionProvider is to basically build our property list and then intercept and handle calls to the GetProperties methods.

Finally, we want a way to create our new properties (via the CreateProperty methods on the DynamicPropertyManager), so we need to implement our property descriptor

public class DynamicPropertyDescriptor<TTarget, TProperty> : PropertyDescriptor
{
   private readonly Func<TTarget, TProperty> getter;
   private readonly Action<TTarget, TProperty> setter;
   private readonly string propertyName;

   public DynamicPropertyDescriptor(
      string propertyName, 
      Func<TTarget, TProperty> getter, 
      Action<TTarget, TProperty> setter, 
      Attribute[] attributes) 
         : base(propertyName, attributes ?? new Attribute[] { })
   {
      this.setter = setter;
      this.getter = getter;
      this.propertyName = propertyName;
   }

   public override bool Equals(object obj)
   {
      var o = obj as DynamicPropertyDescriptor<TTarget, TProperty>;
      return o != null && o.propertyName.Equals(propertyName);
   }

   public override int GetHashCode()
   {
      return propertyName.GetHashCode();
   }

   public override bool CanResetValue(object component)
   {
      return true;
   }

   public override Type ComponentType
   {
      get { return typeof (TTarget); }
   }

   public override object GetValue(object component)
   {
      return getter((TTarget)component);
   }

   public override bool IsReadOnly
   {
      get { return setter == null; }
   }

   public override Type PropertyType
   {
      get { return typeof(TProperty); }
   }

   public override void ResetValue(object component)
   {
   }

   public override void SetValue(object component, object value)
   {
      setter((TTarget) component, (TProperty) value);
   }

   public override bool ShouldSerializeValue(object component)
   {
      return true;
   }
}

Much of this code just supplies default implementations for the abstract class PropertyDescriptor, but as you can see GetValue and SetValue call the getter and setter delegates we registered via the property manager.

That’s basically it. Now anything calling TypeDescriptor.GetProperties will see our new properties (and their attributes) and interact with them through our interceptor methods.

Recalling the code for creating the property manager, we can use the following to confirm that TypeDescriptor.GetProperties does indeed see our ThreeColumns type as having four properties

static void Main(string[] args)
{
   DynamicPropertyManager<ThreeColumns> propertyManager;

   propertyManager = new DynamicPropertyManager<ThreeColumns>();
   propertyManager.Properties.Add(
      DynamicPropertyManager<ThreeColumns>.CreateProperty<ThreeColumns, string>(
         "Four",
         t => "Four",
         null
      ));

   var p = TypeDescriptor.GetProperties(typeof (ThreeColumns));
   Console.WriteLine(p.Count); // outputs 4 instead of the 3 real properties
}
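
Going one step further, adding something like the following to the end of Main shows the dynamic property behaving like a real one, with GetValue invoking the getter we supplied

var descriptor = p["Four"];
Console.WriteLine(descriptor.GetValue(new ThreeColumns())); // outputs "Four"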

Mutual recursion in F#

One of the annoyances of F#, at least when you come from C# (or the like), is that to use a function or type it needs to have been declared before it’s used. Obviously this is a problem if a type references another type which itself references the first type – but of course the F# folks have a way to handle this, and it’s called mutual recursion.

Let’s look at an example of what I’m talking about and at the solution

[<Measure>]
type g = 
   static member toKilograms (value : float<g>) : float<kg> = value / 1000.0<g/kg>
and [<Measure>] kg = 
   static member toGrams (value : float<kg>) : float<g> = value * 1000.0<g/kg>

The above is taken from my previous post on units of measure.

So in the above code type g returns a value measured in kg and type kg returns a value measured in g. Ordinarily g could not use kg as it isn’t declared before type g, but it can in the above example thanks to the and keyword. This can be thought of as telling the compiler to wait until it has read all of the “chained” types before evaluating them.

The same technique is used for creating recursive functions which might call other functions which might call the previous function, i.e. circular function calls.

Here’s an example based on code from Microsoft’s MSDN site (referenced below), as it demonstrates this better than any example I’ve come up with

let rec Even x =
   if x = 0 then true 
   else Odd (x - 1)
and Odd x =
   if x = 0 then false 
   else Even (x - 1)

So in this code the function Even is marked as rec, meaning it’s recursive; it may call the function Odd, which in turn may call Even. Again we chain these functions together using the and keyword.
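
For example, evaluating these in F# Interactive gives

Even 10 |> printfn "%b"  // true
Odd 7 |> printfn "%b"    // true
Odd 4 |> printfn "%b"    // false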

References

Mutually Recursive Functions
Mutually Recursive Types

More Units of Measure

In a previous post I looked at the unit of measure feature of F#. Now it’s time to revisit the topic.

So as a refresher, we can define a unit of measure in the following manner

[<Measure>]
type kg

and we use the unit of measure thus

let weight = 75<kg>

Now, it’s important to note that the unit of measure feature only exists at compile time. At runtime there’s no concept of the unit of measure, i.e. we cannot use reflection or anything else to discover which unit was used, as it no longer exists. However, at compile time the code will not build if we try to combine a value of one unit of measure with an incompatible one.
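
For example, given a second measure g alongside kg (both just illustrative here), the following will not compile because the measures don’t match

[<Measure>] type g

let grams = 500.0<g>
let kilograms = 2.0<kg>

// compiler error – the unit of measure 'kg' does not match the unit of measure 'g'
// let total = grams + kilograms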

But there’s more we can do with units of measure which makes them even more useful. We can give a unit of measure static member functions, for example

[<Measure>]
type g = 
   static member toKilograms (value : float<g>) : float<kg> = value / 1000.0<g/kg>
and [<Measure>] kg = 
   static member toGrams (value : float<kg>) : float<g> = value * 1000.0<g/kg>

In the example above we use a mutually recursive type (using the and syntax). This allows the g type to use the kg type and vice versa.

With this code we can now write

let kilograms = g.toKilograms 123.0<g>

The result of this is that the value kilograms is assigned the result of the toKilograms function, with its unit of measure set to kg.

Whilst assigning a unit of measure to a numeric literal is simple enough, applying it to an existing value requires an F# core function from the LanguagePrimitives module.

This code creates a value with a unit of measure

[<Measure>] 
type g = 
    static member create(value : float) = LanguagePrimitives.FloatWithMeasure<g> value

So in the above code we pass in a value and, using the FloatWithMeasure function, apply the g unit of measure

let weight = 123.0
let weightInGrams = g.create weight

The weightInGrams value will now have the g unit of measure applied to it.