Category Archives: Programming

Documenting your APIs using XML documentation

I’ve been updating one of my GitHub repositories (my Presentation.Core repo) with some more documentation, using XML documentation comments, and thought I’d write a refresher post on taking your code from no documentation all the way through to fully documented.

<summary>

The /// syntax is used for XML documentation comments in C#, for example

/// <summary>
/// A Task/async aware command object. 
/// </summary>
public class AsyncCommand : CommandCommon
{
}

In the above, the <summary> element (and its corresponding end element) within the /// comment acts as a summary of our class. This class-level summary is displayed in Visual Studio IntelliSense when declaring a variable of type AsyncCommand, and when we write code such as new AsyncCommand(), IntelliSense will display the summary for the default constructor (if such documentation exists).

The summary tag is the most important tag for documenting our code using XML documentation, as it acts as the primary source of information for IntelliSense and within generated help files.

<remarks>

The <remarks> element is optional and can be used to expand upon, or add supplementary documentation to, the summary element. For example

/// <summary>
/// A Task/async aware command object. 
/// </summary>
/// <remarks>
/// Automatically handles changes to the built-in, IsBusy flag, so when
/// the command is executing the IsBusy is true and when completed or not 
/// executing IsBusy is false.
/// </remarks>
public class AsyncCommand : CommandCommon
{
}

Summary text is displayed by IntelliSense whereas Remarks are shown in the help file output via tools such as Sandcastle.

<returns>

As you might guess, the <returns> element is used to document a method’s return value, for example

/// <summary>
/// Implementation of CanExecute which returns a boolean
/// to allow the calling method to determine whether the
/// Execute method can be called
/// </summary>
/// <param name="parameter">An object is ignored in this implementation</param>
/// <returns>True if the command can be executed, otherwise False</returns>
public override bool CanExecute(object parameter)
{
}

<param>

Continuing with the elements available for a method, the param element is used for method arguments, to allow the documentation to list what each parameter is used for.

/// <summary>
/// Implementation of CanExecute which returns a boolean
/// to allow the calling method to determine whether the
/// Execute method can be called
/// </summary>
/// <param name="parameter">An object is ignored in this implementation</param>
/// <returns>True if the command can be executed, otherwise False</returns>
public override bool CanExecute(object parameter)
{
}

<value>

Like <returns> but for a property, the <value> element is used along with a summary to describe what the property represents.
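
For example (a minimal sketch; the IsBusy property here is hypothetical, mirroring the earlier remarks example, and it assumes the GetProperty/SetProperty helpers shown later in this post):

/// <summary>
/// Gets or sets whether the command is currently executing.
/// </summary>
/// <value>True whilst the command is executing, otherwise False</value>
public bool IsBusy
{
   get { return GetProperty<bool>(); }
   set { SetProperty(value); }
}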

<para>

The <para> element is used within summary, remarks or return elements to add paragraph formatting/structure to your documentation.
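
For example, a minimal sketch splitting the earlier AsyncCommand summary into two paragraphs (the wording of the second paragraph is just illustrative):

/// <summary>
/// <para>
/// A Task/async aware command object.
/// </para>
/// <para>
/// Use this in place of a standard command when the execute handler is asynchronous.
/// </para>
/// </summary>
public class AsyncCommand : CommandCommon
{
}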

<exception>

The <exception> element is used (as the name suggests) to list exceptions a method may throw during its execution. For example

/// <summary>
/// Constructor take a comparison function which expects two types of T and
/// returns an integer where a value less than 0 indicates the first item is 
/// before the second, a value of 0 indicates they're equivalent 
/// and a value greater than 0 indicates the first item is after
/// the second item. This constructor also takes a function for the object 
/// hash code method.
/// </summary>
/// <param name="objectComparer">
/// A function to compare two items of type T
/// </param>
/// <exception cref="System.NullReferenceException">
/// Thrown when the objectComparer is null
/// </exception>
public ComparerImpl(
   Func<T, T, int> objectComparer)
{
   _objectComparer = 
      objectComparer ?? 
      throw new NullReferenceException("Comparer cannot be null");
}

IntelliSense will display the list of exceptions.

<see>

The <see> element allows us to reference documentation elsewhere within our code. It accepts a cref attribute, within which we reference the classname.methodname (or similar) being linked to. For example

/// <summary>
/// Sets the property value against the property and raises
/// OnPropertyChanging, OnPropertyChanged etc. as required.
/// <see cref="GetProperty{T}"/>
/// </summary>
protected bool SetProperty<T>(
   Func<T> getter, 
   Func<T, T> setter, 
   T value, 
   [CallerMemberName] string propertyName = null)
{
}

Note: When referencing another method within the same class we need not declare the class name within the cref, but when referencing a member of another class we should use the classname.methodname syntax.

Within IntelliSense and documentation the full method name will be displayed for the see element’s cref.

<seealso>

Usage for <seealso> is as per the see element, but generated documentation will create a “See Also” section and place such references within it.
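
For example, a minimal sketch based on the earlier SetProperty documentation, moving the reference out of the summary and into a seealso element:

/// <summary>
/// Sets the property value against the property and raises
/// OnPropertyChanging, OnPropertyChanged etc. as required.
/// </summary>
/// <seealso cref="GetProperty{T}"/>
protected bool SetProperty<T>(
   Func<T> getter, 
   Func<T, T> setter, 
   T value, 
   [CallerMemberName] string propertyName = null)
{
}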

<typeparam>

The <typeparam> element is used to document the generic type parameters of a class and/or method, for example

/// <summary>
/// Implements IComparer&lt;T&gt;, IEqualityComparer&lt;T&gt; 
/// and IComparer
/// </summary>
/// <typeparam name="T">The type being compared</typeparam>
public class ComparerImpl<T> : 
   IComparer<T>, IEqualityComparer<T>, IComparer
{
}

<paramref>

Used within a summary or remarks, the <paramref> element refers to a parameter of the method being documented. For example

/// <summary>
/// Gets the current value via a Func via the <paramref name="generateFunc"/>
/// </summary>
/// <typeparam name="T">The property type</typeparam>
/// <param name="generateFunc">The function to create the return value</param>
/// <param name="propertyName">The name of the property</param>
/// <returns>The value of type T</returns>
protected T ReadOnlyProperty<T>(
   Func<T> generateFunc, 
   [CallerMemberName] string propertyName = null)
{
}

In the above, generateFunc is displayed within the IntelliSense summary and highlighted (via an italic font) in generated help files.

<typeparamref>

Like paramref, the <typeparamref> element can be used within summary, remarks etc. to link to a specific generic type, for example

/// <summary>
/// Gets the current property value as type <typeparamref name="T"/>
/// </summary>
protected T GetProperty<T>(
   Func<T> getter, 
   Func<T, T> setter, 
   [CallerMemberName] string propertyName = null)
{
}

As with paramref, generated documentation may highlight the name within a help file, and it’s also displayed via IntelliSense within the summary.

<list>

The <list> element allows us to define formatted text in a bullet, numbered or table format within our summary block, for example

/// <summary>
/// Acts as a base class for view models, can be used
/// with backing fields or delegating getter/setter 
/// functionality to another class - useful for situations
/// where the underlying model is used directly
/// <list type="bullet">
/// <item><description>GetProperty</description></item>
/// <item><description>SetProperty</description></item>
/// <item><description>ReadOnlyProperty</description></item>
/// </list>
/// </summary>
public class ViewModelWithModel : ViewModelCommon
{
}

This results in generated documentation showing a bullet list of items; within IntelliSense this isn’t quite so pretty and results in just the descriptions listed with space delimitation.
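
If a tabular layout is preferred, the list element also supports a table type along with listheader, term and description child elements. A minimal sketch reusing the class above:

/// <summary>
/// Acts as a base class for view models
/// <list type="table">
/// <listheader>
/// <term>Method</term>
/// <description>Purpose</description>
/// </listheader>
/// <item>
/// <term>GetProperty</term>
/// <description>Gets a property value</description>
/// </item>
/// <item>
/// <term>SetProperty</term>
/// <description>Sets a property value</description>
/// </item>
/// </list>
/// </summary>
public class ViewModelWithModel : ViewModelCommon
{
}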

<example>

The <example> element allows us to add example code to our documentation, for example

/// <summary>
/// Sets the property value against the property and raises
/// OnPropertyChanging, OnPropertyChanged etc. as required
/// </summary>
/// <example>
/// <code>
/// public string Name
/// {
///    set => SetProperty(value);
///    get => GetProperty();
/// }
/// </code>
/// </example>
protected bool SetProperty<T>(
   T value, 
   [CallerMemberName] string propertyName = null)
{
}

Note: Without the <code> element the example is not formatted as we’d expect. This results in a generated help file section named Examples.

<code>

The <code> element is used within the example element to format code samples. See above.

<c>

The <c> element may be used within a summary or other element and formats the text within it as code.
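
For example, a minimal sketch based on the earlier CanExecute documentation, formatting the true/false values as inline code:

/// <summary>
/// Returns <c>true</c> if the command can be executed, otherwise <c>false</c>
/// </summary>
/// <param name="parameter">An object which is ignored in this implementation</param>
public override bool CanExecute(object parameter)
{
}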

<permission>

As you probably guessed, we can highlight the permissions expected or required for a method etc. using the <permission> element. For example

/// <summary>
/// Gets the current property value
/// </summary>
/// <permission cref="System.Security.PermissionSet">
/// Unlimited access to this method.
/// </permission>
protected T GetProperty<T>([CallerMemberName] string propertyName = null)
{
}

The above will result in generated documentation with a security section which includes a table of the cref value and the description for the permissions required for the method.

<include>

The <include> element allows us to use documentation from an external XML file.
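
For example, a minimal sketch (the file name ApiDocs.xml and the member name used in the path are purely hypothetical); the external XML file contains the summary, remarks etc. elements under a node matching the path expression:

/// <include file='ApiDocs.xml' path='docs/members[@name="AsyncCommand"]/*' />
public class AsyncCommand : CommandCommon
{
}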

How do we generate the XML documents?

Once we’ve documented our code using XML documentation comments, we need to get Visual Studio to generate the XML files, essentially extracting the comment blocks into external files.

For each project you wish to generate documentation for (within Visual Studio), select the properties for the project and then the Build tab. In the Output section, check the XML documentation file check box and either accept the default file name or create your own.

References

Documenting your code with XML comments

Reading the BOM/preamble

Sometimes we get a file with BOM (byte order mark, or preamble) bytes at the start, which denote the file’s Unicode encoding. We don’t always care about these and simply want to remove the BOM (if one exists).

Here’s some fairly simple code which shows the reading of a stream or file with code to “skip the BOM” at the bottom

using (var stream = 
   File.Open(currentLogFile, 
      FileMode.Open, 
      FileAccess.Read, 
      FileShare.ReadWrite))
{
   var length = stream.Length;
   var bytes = new byte[length];
   var numBytesToRead = (int)length;
   var numBytesRead = 0;
   do
   {
      // read the file in chunks of 1024
      var n = stream.Read(
         bytes, 
         numBytesRead, 
         Math.Min(1024, numBytesToRead));

      numBytesRead += n;
      numBytesToRead -= n;

   } while (numBytesToRead > 0);

   // check for a UTF-8 BOM/preamble at the start of the file
   var bom = new UTF8Encoding(true).GetPreamble();

   // if the first bytes don't match the preamble, return the bytes untouched,
   // otherwise skip past the BOM
   return bom.Where((b, i) => b != bytes[i]).Any() ? 
      bytes : 
      bytes.Skip(bom.Length).ToArray();
}
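
As an aside, if all we want is the file’s text without the BOM, a StreamReader with BOM detection enabled achieves the same result with less code. A minimal sketch, assuming the same currentLogFile variable:

using (var stream = 
   File.Open(currentLogFile, 
      FileMode.Open, 
      FileAccess.Read, 
      FileShare.ReadWrite))
// detectEncodingFromByteOrderMarks tells the reader to consume any BOM
// and use it to select the encoding
using (var reader = new StreamReader(
   stream, Encoding.UTF8, detectEncodingFromByteOrderMarks: true))
{
   var text = reader.ReadToEnd();
   // use text...
}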

C# 7 Tuples

C#/.NET has had support for the Tuple class (a reference type) since .NET 4, but with C# 7 tuples now get some “syntactic sugar” (and a new ValueTuple struct) to make them part of the language.

Previous to C# 7 we’d write code like this

var tuple = new Tuple<string, int>("Four", 4);

var firstItem = tuple.Item1;

Now, in a style more like F#, we can write the following

var tuple = ("Four", 4);

var firstItem = tuple.Item1;

This makes the creation of tuples slightly simpler, but we can also make the “items” within a tuple more readable by giving them names. So we could rewrite the above in a number of ways, examples of each supplied below

(string name, int number) tuple1 = ("Four", 4);
var tuple2 = (name: "Four", number: 4);
var (name, number) =  ("Four", 4);

var firstItem1 = tuple1.name;
var firstItem2 = tuple2.name;
var firstItem3 = name;

Whilst not quite as nice as F#, we also now have something similar to pattern matching on tuples. For example, if we were interested in creating a switch statement based upon the numeric value (the second item) of the named tuple above, we could write

switch (tuple2)
{
   case var t when t.number == 4:
      Console.WriteLine("Found 4");
      break;
}

Again, similar to F#, we can also discard items/params using the underscore (_), for example

var (_, number) = CreateTuple();

In this example we assume CreateTuple returns a tuple with two items; the first is simply ignored (or discarded).
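
For completeness, CreateTuple is just a hypothetical method returning a two item tuple, something along these lines:

private static (string name, int number) CreateTuple()
{
   return ("Four", 4);
}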

The Gherkin language

Gherkin is a DSL used within BDD development. It’s used along with Cucumber which processes the DSL or in the case of .NET we can use tools such as SpecFlow (which I posted about a long time back, see Starting out with SpecFlow) to help generate files and code.

Gherkin allows us to create the equivalent of use cases in a human readable form, using a simple set of keywords and syntax which can then be used to generate a series of method calls to undertake some action and assertion.

Getting started

We can use a standard text editor to create Gherkin feature files, but if you prefer syntax highlighting and IntelliSense (although the language is pretty simple) then install SpecFlow into Visual Studio, add a Gherkin syntax highlighter into VSCode (for example), or just use your preferred text editor.

I’m going to create a feature file (with the .feature extension) using the SpecFlow item template, so we have a starting point to work through. Here’s the generated file

Feature: Add a project
	In order to avoid silly mistakes
	As a math idiot
	I want to be told the sum of two numbers

@mytag
Scenario: Add two numbers
	Given I have entered 50 into the calculator
	And I have entered 70 into the calculator
	When I press add
	Then the result should be 120 on the screen

What we have here is a Feature which is meant to describe a single piece of functionality within an application. The feature name should be on the same line as the Feature: keyword.

In this example, we’ve defined a feature which indicates our application will have some way to add a project within our application. We can now add an optional description, which is exactly what the SpecFlow template did for us. The Description may span multiple lines (as can be seen above with the lines under the Feature line being the description) and should be a brief explanation of the specific feature or use case. Whilst the aim is to be brief, it should include acceptance criteria and any relevant information such as user permissions, roles or rules around the feature.

Let’s change our feature text to be a little meaningful to our use case.

Feature: Add a project

	Any user should be able to create/add a new project
	as long as the following rules are met

	1. No duplicate project names can exist
	2. No empty project names should be allowed

I’m sure with a little thought I can come up with more rules, but you get the idea.

The description of the feature is a useful piece of documentation and can be used as a specification/acceptance criteria.

Looking at the generated feature code, we can see that SpecFlow also added @mytag. Tags allow us to group scenarios. In terms of our end tests, this can be seen as a way of grouping features, scenarios etc. Multiple tags may be applied to a single feature or scenario etc. for example

@project @mvp
Feature: Add a project

I don’t need any tags for the feature I’m implementing here, so I’ll delete that line of code.

The Scenario is where we define each specific scenario of a feature and the steps to be taken/expected. The Scenario takes a similar form to a Feature, i.e. Scenario: and then a description of the context.

Following the Scenario line, we then begin to define the steps that make up our scenario, using the keywords Given, When, Then, And and But.

In unit testing terms, we can view Given as a precondition or setup; When, And and But as actions (where But is seen as a negation); and Then as an assertion.

Let’s just change to use some of these steps in a more meaningful manner within the context of our Scenario

Scenario: Add a project to the project list
	Given I have added a valid project name
	When I press the OK button
	Then the list of projects should now include my newly added project

Eventually, if we generate code from this feature file, each of these steps would get turned into a set of methods (see the sketch after this list) which could be used as follows

  • Given becomes an action to initialize the code to the expected context
  • When becomes an action to set-up any variables, etc.
  • Then becomes an assertion or validate any expectations
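
To give a feel for what the code behind these steps might look like, here’s a minimal sketch of a SpecFlow binding class for the scenario above (the class and method names are my own invention; the step text must match the feature file):

using TechTalk.SpecFlow;

[Binding]
public class AddProjectSteps
{
   [Given("I have added a valid project name")]
   public void GivenIHaveAddedAValidProjectName()
   {
      // initialize the context, e.g. create the view model and assign a project name
   }

   [When("I press the OK button")]
   public void WhenIPressTheOkButton()
   {
      // carry out the action, e.g. invoke the OK command
   }

   [Then("the list of projects should now include my newly added project")]
   public void ThenTheListShouldIncludeTheNewProject()
   {
      // assert the expectation against the project list
   }
}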

Multiple When steps can be defined using the And keyword. For example, imagine our Scenario looked like this

Scenario: Add a project to the project list
	Given I have added a valid project name
	When I press the OK button
	And I checked the Allow checkbox
	Then the list of projects should now include my newly added project

Now in addition to the When I press the OK button step I would also get another When created, as the And keyword simply becomes another When action. In essence the And duplicates the previous keyword.

We can also include the But keyword. Like And, this is really another way of defining additional When steps, but in a more human readable way. But works in the same way as the And keyword, simply creating another When step in generated code; however, it should be viewed as a negation step, for example

Scenario: Add a project to the project list
	Given I have added a valid project name
	When I press the OK button
	But the Do Not Allow checkbox is unchecked
	Then the list of projects should now include my newly added project

Finally, as stated earlier, Then can be viewed as the place to write our assertions, or simply check whether the results match our expectations. We can again use And after the Then to create multiple Then steps and thus assert multiple expectations, for example

Scenario: Add a project to the project list
	Given I have added a valid project name
	When I press the OK button
	But the Do Not Allow checkbox is unchecked
	Then the list of projects should now include my newly added project
	And the list of projects should increase by 1

More keywords

In the previous section we covered the core keywords of Gherkin for defining our features and scenarios. But Gherkin also includes the following

Background, Scenario Outline and Examples.

The Background keyword is used to define reusable Given steps, i.e. if all our scenarios end up requiring the application to be in edit mode we might declare a background before any scenarios, such as this

Background:
	Given the projects list is in edit mode
	And the user clicks the Add button

We’ve now created a sort of top-level scenario which is run before each Scenario.

The Scenario Outline keywords allow us to define a sort of scenario function or template. So, if we have multiple scenarios which only differ in terms of data being used, then we can create a Scenario Outline and replace the specific data points with variables.

For example, let’s assume we have scenarios which define multiple project names to fulfil the feature’s two rules (which we outlined in the feature). Let’s assume we always have a project named “Default” within the application and therefore we cannot duplicate this project name. We also cannot enter a “” project name.

If we write these as two scenarios, then we might end up with the following

Scenario: Add a project to the project list with an empty name
	Given the project name ""
	When I press the OK button
	Then the project should not be added

Scenario: Add a project to the project list with a duplicate name
	Given the project name "Default"
	When I press the OK button
	Then the project should not be added

If we include values within quotation marks or include numbers within our steps, then these will become arguments to the methods generated for these steps. This obviously offers us a way to reuse such steps or use example data etc.

Using a Scenario Outline these could be instead defined as

Scenario Outline: Add a project to the project list with an invalid name
	Given the project name <project-name>
	When I press the OK button
	Then the project should not be added

	Examples: 
	| project-name |
	| ""           |
	| "Default"    |

The <> markers act as placeholders and the string within can be viewed as a variable name. We then define Examples, which become our data inputs to the scenario.

Gherkin also includes the # character to mark a line as a comment. Triple quotation marks (""") are sometimes used to effectively comment out multiple lines, though strictly they delimit a doc string (a multi-line argument attached to the preceding step). Here’s an example of usage

# This is a comment
	
Scenario Outline: Add a project to the project list with an invalid name
	Given the project name <project-name>
	"""
	Given the project name <project-id>
	"""
	When I press the OK button
	Then the project should not be added

	Examples: 
	| project-name |
	| ""           |
	| "Default"    |

It should be noted, having listed these different keywords and their uses, that you can also create a scenario that’s just a Given followed by a Then (in other words, a setup step followed by an assertion) if this is all you need.

For example

Scenario: Add a project
	Given A valid project name
	Then the project list should increase by 1

SpecFlow specifics

On top of the standard Gherkin keywords, SpecFlow adds a few bits.

The @ignore tag is used by SpecFlow to generate ignored test methods.

SpecFlow also adds Scenario Template as a synonym for Scenario Outline. Likewise, Scenarios is an alternative to Examples.

Code generation

We’re not going to delve into Cucumber or the SpecFlow generated code, except to point out that if you define more than one step within a scenario with the same text, this will generate a call to the same method in code. So whilst you might read a scenario as if it’s a new context or the like, ultimately the generated code will execute the same method.

References

Gherkin Reference
Using Gherkin Language In SpecFlow

IOException – the process cannot access the file because it is used by another process

I’m using a logging library which is writing to a local log file and I also have a diagnostic tool which allows me to view the log files, but if I try to use File.Open I get the IOException,

“the process cannot access the file because it is used by another process”

this is obviously self-explanatory (and sadly not the first time I’ve had this and had to try and recall the solution).

So to save me searching for it again, here’s the solution which allows me to open a file that’s already opened for writing by another process

using (var stream = 
   File.Open(currentLogFile, 
      FileMode.Open, 
      FileAccess.Read, 
      FileShare.ReadWrite))
{
   // stream reading code
}

The key to the File.Open call is FileShare.ReadWrite. We’re only interested in opening the file to read, but because another process already has it open for writing we still need to specify the share flag FileShare.ReadWrite.

Beware those async void exceptions in unit tests

This is a cautionary tale…

I’ve been moving my builds to a new server box and in doing so started to notice some problems. These problems didn’t exist on the previous build box, which might be down to the updated versions of the tools (such as NAnt) I was using on the new box.

Anyway, the crux of the problem was that I was getting exceptions when the tests were run. Nothing told me which assembly or which test; I just ended up with NullReferenceExceptions and a stack trace stating this

System.Runtime.CompilerServices.AsyncMethodBuilderCore.b__0(System.Object)

Straight away it looks like async/await issues.

To cut a long story short.

I removed all the test assemblies, then one by one placed them back into the build, running the “test” target in NAnt each time until I found the assembly with the problems. Running the tests this way also seemed to report more specifics about the source of the exception.

As we know, we need to be particularly careful handling exceptions that occur within an async void method (mine exist because they’re event handlers which are raised upon property changes in a view model).
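
To illustrate the problem, an async void method returns no Task for the caller (or test) to await, so an exception thrown after an await escapes to the synchronization context rather than failing the test in an obvious way. A minimal sketch (the handler and helper names are made up):

// The caller cannot await an async void method, so this exception escapes
// to the synchronization context / thread pool rather than the test.
private async void OnViewModelPropertyChanged(object sender, PropertyChangedEventArgs e)
{
   await Task.Delay(10);
   throw new NullReferenceException("boom");
}

// A safer pattern: keep the async void wrapper trivial and handle failures inside it.
private async void OnViewModelPropertyChangedSafe(object sender, PropertyChangedEventArgs e)
{
   try
   {
      await HandlePropertyChangedAsync(e.PropertyName);
   }
   catch (Exception ex)
   {
      // log or otherwise surface the failure rather than letting it escape
      Console.WriteLine(ex);
   }
}

private Task HandlePropertyChangedAsync(string propertyName) => Task.CompletedTask;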

Interestingly, when I ran the ReSharper test runner all my tests passed, but switching to running the tests in Debug showed up all the exceptions (assuming, of course, that you have the relevant “Exceptions” enabled within Visual Studio). The downside of this approach, of course, is that I also see all the exceptions that are intentionally tested for, i.e. ensuring my code throws when it should. But it was worth it to solve these problems.

Luckily none of these exceptions would have been a problem in the normal running of my application; they were generally down to missing mocking code or the like. But it did go to show I couldn’t be smug in believing my tests caught every issue.

Extending the old WPF drag/drop behavior

A while back I wrote a post on creating a WPF drag/drop target behavior (well, really it’s a drop behavior). Let’s extend this to add keyboard paste capabilities and tie it into a view model.

Adding keyboard capabilities

I’ll list the full source at the end of this post, for now I’ll just show changes from my original post.

In the behavior’s OnAttached method add

AssociatedObject.PreviewKeyDown += AssociatedObjectOnKeyDown;

in the OnDetaching method add

AssociatedObject.PreviewKeyDown -= AssociatedObjectOnKeyDown;

the AssociatedObjectOnKeyDown method looks like this

private void AssociatedObjectOnKeyDown(object sender, KeyEventArgs e)
{
   if ((e.Key == Key.V && 
      (Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control) ||
         (e.Key == Key.V) && (Keyboard.IsKeyDown(Key.LeftCtrl) || 
            Keyboard.IsKeyDown(Key.RightCtrl)))
   {
      var data = Clipboard.GetDataObject();
      if (CanAccept(sender, data))
      {
         Drop(sender, data);
      }
   }
}

Don’t worry about CanAccept and Drop for the moment. As you can see, we capture the preview key down events and, if Ctrl+V is pressed whilst the AssociatedObject has focus, we get the data object from the clipboard. We then check whether our view model accepts the data (i.e. if we only accept CSV we can reject the paste if somebody tries to paste an image into the view) and, if it does, we call Drop, which our old drop handler has also been refactored to use.

Both the CanAccept and Drop methods need to call into the view model, for it to decide whether to accept the data and, upon accepting, how to use it. So first we need to define an interface our view model can implement which allows the behavior to call into it. Here’s IDropTarget

public interface IDropTarget
{
   bool CanAccept(object source, IDataObject data);
   void Drop(object source, IDataObject data);
}

It’s fairly obvious how this is going to work: the behavior will decode the clipboard/drop event to an IDataObject. The source argument is for situations where we might be dragging from one listbox (for example) to another and want access to the view model behind the drag source.

If we take a look at both the CanAccept method and Drop method on the behavior

private bool CanAccept(object sender, IDataObject data)
{
   var element = sender as FrameworkElement;
   if (element != null && element.DataContext != null)
   {
      var dropTarget = element.DataContext as IDropTarget;
      if (dropTarget != null)
      {
         if (dropTarget.CanAccept(data.GetData("DragSource"), data))
         {
            return true;
         }
      }
   }
   return false;
}

private void Drop(object sender, IDataObject data)
{
   var element = sender as FrameworkElement;
   if (element != null && element.DataContext != null)
   {
      var target = element.DataContext as IDropTarget;
      if (target != null)
      {
         target.Drop(data.GetData("DragSource"), data);
      }
   }
}

As you can see, in both cases we try to get the DataContext of the framework element that sent the event and if it is an IDropTarget we hand off CanAccept and Drop to it.

What does the view model look like?

So a simple view model (which just supplies a property Items of type ObservableCollection) is implemented below

public class SampleViewModel : IDropTarget
{
   public SampleViewModel()
   {
      Items = new ObservableCollection<string>();
   }

   bool IDropTarget.CanAccept(object source, IDataObject data)
   {
      return data?.GetData(DataFormats.CommaSeparatedValue) != null;
   }

   void IDropTarget.Drop(object source, IDataObject data)
   {
       var s = data?.GetData(DataFormats.CommaSeparatedValue) as string;
       if (s != null)
       {
           var split = s.Split(
              new [] { ',', '\r', '\n' }, 
                 StringSplitOptions.RemoveEmptyEntries);
           foreach (var item in split)
           {
              if (!String.IsNullOrEmpty(item))
              {
                 Items.Add(item);
              }
           }
       }
    }

    public ObservableCollection<string> Items { get; private set; }
}

In the above we only accept CSV data. The Drop method is very simple and just splits the string into separate parts, each of which is then added to the Items collection.

Our XAML (using a ListBox for the demo) looks like this

<ListBox ItemsSource="{Binding Items}" x:Name="List">
   <i:Interaction.Behaviors>
      <local:UIElementDropBehavior />
   </i:Interaction.Behaviors>
</ListBox>

Note: the x:Name is here because in MainWindow.xaml.cs (which hosts this control) we needed to force focus onto the ListBox at startup; otherwise the control, when empty, doesn’t seem to get focus for the keyboard events. Of course we might instead look to use a focus behavior.

The UIElementDropBehavior in full

public class UIElementDropBehavior : Behavior<UIElement>
{
    private AdornerManager _adornerManager;

    protected override void OnAttached()
    {
        base.OnAttached();

        AssociatedObject.AllowDrop = true;
        AssociatedObject.DragEnter += AssociatedObject_DragEnter;
        AssociatedObject.DragOver += AssociatedObject_DragOver;
        AssociatedObject.DragLeave += AssociatedObject_DragLeave;
        AssociatedObject.Drop += AssociatedObject_Drop;
        AssociatedObject.PreviewKeyDown += AssociatedObjectOnKeyDown;
    }

    protected override void OnDetaching()
    {
        base.OnDetaching();

        AssociatedObject.AllowDrop = false;
        AssociatedObject.DragEnter -= AssociatedObject_DragEnter;
        AssociatedObject.DragOver -= AssociatedObject_DragOver;
        AssociatedObject.DragLeave -= AssociatedObject_DragLeave;
        AssociatedObject.Drop -= AssociatedObject_Drop;
        AssociatedObject.PreviewKeyDown -= AssociatedObjectOnKeyDown;
    }

    private void AssociatedObjectOnKeyDown(object sender, KeyEventArgs e)
    {
        if ((e.Key == Key.V && (Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control) ||
            (e.Key == Key.V) && (Keyboard.IsKeyDown(Key.LeftCtrl) || Keyboard.IsKeyDown(Key.RightCtrl)))
        {
            var data = Clipboard.GetDataObject();
            if (CanAccept(sender, data))
            {
                Drop(sender, data);
            }
        }
    }

    private void AssociatedObject_Drop(object sender, DragEventArgs e)
    {
        if (CanAccept(sender, e.Data))
        {
            Drop(sender, e.Data);
        }

        if (_adornerManager != null)
        {
            _adornerManager.Remove();
        }
        e.Handled = true;
    }

    private void AssociatedObject_DragLeave(object sender, DragEventArgs e)
    {
        if (_adornerManager != null)
        {
            var inputElement = sender as IInputElement;
            if (inputElement != null)
            {
                var pt = e.GetPosition(inputElement);

                var element = sender as UIElement;
                if (element != null)
                {
                    if (!pt.Within(element.RenderSize) || e.KeyStates == DragDropKeyStates.None)
                    {
                        _adornerManager.Remove();
                    }
                }
            }
        }
        e.Handled = true;
    }

    private void AssociatedObject_DragOver(object sender, DragEventArgs e)
    {
        if (CanAccept(sender, e.Data))
        {
            e.Effects = DragDropEffects.Copy;

            if (_adornerManager != null)
            {
                var element = sender as UIElement;
                if (element != null)
                {
                    _adornerManager.Update(element);
                }
            }
        }
        else
        {
            e.Effects = DragDropEffects.None;
        }
        e.Handled = true;
    }

    private void AssociatedObject_DragEnter(object sender, DragEventArgs e)
    {
        if (_adornerManager == null)
        {
            var element = sender as UIElement;
            if (element != null)
            {
                _adornerManager = new AdornerManager(AdornerLayer.GetAdornerLayer(element), adornedElement => new UIElementDropAdorner(adornedElement));
            }
        }
        e.Handled = true;
    }

    private bool CanAccept(object sender, IDataObject data)
    {
        var element = sender as FrameworkElement;
        if (element != null && element.DataContext != null)
        {
            var dropTarget = element.DataContext as IDropTarget;
            if (dropTarget != null)
            {
                if (dropTarget.CanAccept(data.GetData("DragSource"), data))
                {
                    return true;
                }
            }
        }
        return false;
    }

    private void Drop(object sender, IDataObject data)
    {
        var element = sender as FrameworkElement;
        if (element != null && element.DataContext != null)
        {
            var target = element.DataContext as IDropTarget;
            if (target != null)
            {
                target.Drop(data.GetData("DragSource"), data);
            }
        }
    }
}

Sample Code

DragAndDropBehaviorWithPaste

Unit tests in Go

In the previous post I mentioned that the Go SDK often has unit tests alongside the package code. So what do we need to do to write unit tests in Go?

Let’s assume we have the package (from the previous post)

package test

func Echo(s string) string {
	return s
}

Assuming the previous code is in the file test.go, we then create a new file in the same package/folder named test_test.go (I know, the name’s not great).

Let’s look at the code within this file

package test_test

import "testing"
import "Test1/test"

func TestEcho(t *testing.T) {
	expected := "Hello"
	actual := test.Echo("Hello")

	if actual != expected {
		t.Error("Test failed")
	}
}

So Go’s unit testing functionality comes in the package “testing”, and our test functions must start with the word Test and take a pointer to testing.T. T gives us the methods to report failures etc.

In Gogland you can select the test_test.go file, right-click, and you’ll see the Run, Debug and Run with coverage options.

How does yield return work in .NET

A while back I had a chat with somebody who seemed very interested in what the IL code for certain C#/.NET language features might look like – whilst it’s something I did have an interest in during the early days of .NET, it’s not something I had looked into since then, so it made me interested to take a look again.

One specific language feature of interest was “what would the IL for yield return look like”.

This is what I found…

Here’s a stupidly simple piece of C# code that we’re going to use. We’ll generate the binary then decompile it and view the IL etc.

public IEnumerable<string> Get()
{
   yield return "A";
}

I’m using JetBrains dotPeek (ILDASM, Reflector or ILSpy can of course also be used to generate the IL etc.). The IL created for the above method looks like this

.method public hidebysig instance 
   class [mscorlib]System.Collections.Generic.IEnumerable`1<string> 
      Get() cil managed 
{
   .custom instance void  [mscorlib]System.Runtime.CompilerServices.IteratorStateMachineAttribute::.ctor(class [mscorlib]System.Type) 
   = (
      01 00 1b 54 65 73 74 59 69 65 6c 64 2e 50 72 6f // ...TestYield.Pro
      67 72 61 6d 2b 3c 47 65 74 3e 64 5f 5f 31 00 00 // gram+<Get>d__1..
      )
   // MetadataClassType(TestYield.Program+<Get>d__1)
   .maxstack 8

   IL_0000: ldc.i4.s     -2 // 0xfe
   IL_0002: newobj       instance void TestYield.Program/'<Get>d__1'::.ctor(int32)
   IL_0007: dup          
   IL_0008: ldarg.0      // this
   IL_0009: stfld        class TestYield.Program TestYield.Program/'<Get>d__1'::'<>4__this'
   IL_000e: ret          
} // end of method Program::Get

If we ignore the IteratorStateMachineAttribute and jump straight to the CIL code label IL_0002, it’s probably quite obvious (even if you don’t know anything about IL) that this is creating a new instance of some type, which appears to be a nested class (within the Program class) named <Get>d__1. The preceding ldc.i4.s instruction simply pushes an Int32 value onto the stack, in this instance the value -2.

Note: IteratorStateMachineAttribute expects a Type argument which is the state machine type that’s generated by the compiler.

Now I could display the IL for this new type and we could walk through that, but it’d be much easier to view some equivalent C# source to get a more readable representation. So I got dotPeek to generate the source for this type (from the IL).

First let’s see what changes are made by the compiler to the Get() method we wrote

[IteratorStateMachine(typeof (Program.<Get>d__1))]
public IEnumerable<string> Get()
{
   Program.<Get>d__1 getD1 = new Program.<Get>d__1(-2);
   getD1.<>__this = this;
   return (IEnumerable<string>) getD1;
}

You can now see the newobj in its C# form, creating the <Get>d__1 object with a constructor argument of -2, which is assigned as the initial state.

Now let’s look at the generated <Get>d__1 code

[CompilerGenerated]
private sealed class <Get>d__1 : 
    IEnumerable<string>, IEnumerable, 
    IEnumerator<string>, IDisposable, 
    IEnumerator
{
   private int <>1__state;
   private string <>2__current;
   private int <>l__initialThreadId;
   public Program <>4__this;

   string IEnumerator<string>.Current
   {
      [DebuggerHidden] get
      {
         return this.<>2__current;
      }
   }

   object IEnumerator.Current
   {
      [DebuggerHidden] get
      {
         return (object) this.<>2__current;
      }
   }

   [DebuggerHidden]
   public <Get>d__1(int <>1__state)
   {
      base..ctor();
      this.<>1__state = param0;
      this.<>l__initialThreadId = Environment.CurrentManagedThreadId;
   }

   [DebuggerHidden]
   void IDisposable.Dispose()
   {
   }

   bool IEnumerator.MoveNext()
   {
      switch (this.<>1__state)
      {
         case 0:
            this.<>1__state = -1;
            this.<>2__current = "A";
            this.<>1__state = 1;
            return true;
          case 1:
            this.<>1__state = -1;
            return false;
          default:
            return false;
      }
   }

   [DebuggerHidden]
   void IEnumerator.Reset()
   {
      throw new NotSupportedException();
   }

   [DebuggerHidden]
   IEnumerator<string> IEnumerable<string>.GetEnumerator()
   {
      Program.<Get>d__1 getD1;
      if (this.<>1__state == -2 && 
          this.<>l__initialThreadId == Environment.CurrentManagedThreadId)
      {
          this.<>1__state = 0;
          getD1 = this;
      }
      else
      {
          getD1 = new Program.<Get>d__1(0);
          getD1.<>4__this = this.<>4__this;
      }
      return (IEnumerator<string>) getD1;
   }

   [DebuggerHidden]
   IEnumerator IEnumerable.GetEnumerator()
   {
      return (IEnumerator) this.System.Collections.Generic.IEnumerable<System.String>.GetEnumerator();
    }
}

As you can see, yield return causes the compiler to create an entire enumerable/enumerator (state machine) implementation for just that one line of code.
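
To see the state machine in action, iterating the result with foreach is just sugar over GetEnumerator/MoveNext/Current, so code like this drives the generated MoveNext shown above:

var program = new Program();
foreach (var item in program.Get())
{
   // first call to MoveNext moves from state 0 to state 1 and Current is "A"
   Console.WriteLine(item);
}
// the next call to MoveNext hits state 1, returns false and the loop ends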

Remoting in C# (legacy code)

In another post I looked at remoting using WCF but what were things like before WCF (or what’s an alternative to WCF)? I thought I’d post just the bare bones for getting a really simple service and client up and running.

Note: Microsoft recommends new remoting code uses WCF, so this post is more for understanding legacy code.

Note: I am not going to go into the lifecycle, singletons/single instances etc. here, I’m just going to concentrate on the code to get something working.

The Server

As per my previous post on WCF remoting, we’re going to simply create a console application to act as our server, so go ahead and create one (mine’s named OldRemoteServer) and then add the following

public interface IRemoteService
{
   void Write(string message);
}

into a file of its own, so that our client can link to it in the client’s Visual Studio solution.

Here’s an implementation for the above

public class RemoteService : MarshalByRefObject, IRemoteService
{
   public void Write(string message)
   {
      Console.WriteLine(message);
   }
}

Note: The implementation needs to derive from MarshalByRefObject, or the class needs to be marked with the Serializable attribute or implement ISerializable. We’re solely going to look at the MarshalByRefObject implementation for now, as these objects are passed by reference to the client, whereas the serializable implementations are aimed at passing by value.

Now let’s put some code in the Main method of our Console app.

var channel = new TcpChannel(1002);
ChannelServices.RegisterChannel(channel, false);

RemotingConfiguration.RegisterWellKnownServiceType(
  typeof(RemoteService), 
  "Remote", 
   WellKnownObjectMode.Singleton);

// now let's ensure the console remains open
Console.ReadLine();

In the above code we’re using a TCP channel on port 1002, and in RegisterWellKnownServiceType we’re registering the service type (this should be the implementation, not the interface) and supplying the name “Remote” for it. In this case I’m also setting it to be a singleton.

You’ll need to add a reference to System.Runtime.Remoting and the following using clauses

using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

The Client

Create another Console application to simply test this and add the file with the IRemoteService interface.

I’ve added it by selecting the client project, then Add… | Existing Item, locating the file and, instead of clicking the Add button, selecting the dropdown and choosing Add As Link. Then if you change the interface it’ll be immediately reflected in the client; of course, in a more complex application we’d have the interfaces in their own assembly.

Now to get the client to call the server we simply place the following in the Main method of the Console app.

var obj = (IRemoteService)Activator.GetObject(
   typeof(IRemoteService), 
   "tcp://localhost:1002/Remote");

obj.Write("Hello World");

In the above you’ll see that we use the Activator to get an object, supplying the type; the URL shows we’re using the TCP channel, port 1002 as set up in the server, and the name “Remote” we gave our object on the server.

This creates a transparent proxy which we then simply call the interface method Write on.

Configuration

In the above example code I’ve simply hard-coded the configuration details, but of course we can create a .config file to handle such configuration instead.

Let’s replace all the server Main method code with the following

RemotingConfiguration.Configure("Server.config", false);

Add an App.config to the project; we’re going to rename it Server.config for this example. Ensure the file’s properties (in Visual Studio) are set to Copy Always (or Copy if newer) so the copy in the bin folder is up to date.

Now here’s the Server.config which recreates the singleton, tcp, port 1002 settings previously handled in code

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.runtime.remoting>
  <application>
    <service>
      <wellknown
        type="OldRemoteServer.RemoteService, OldRemoteServer"
        objectUri="Remote"
        mode="Singleton" />
    </service>
    <channels>
      <channel ref="tcp" port="1002"/>
    </channels>
  </application>

  </system.runtime.remoting>
</configuration>

Now if you run the server and then the client, everything should work as before.

Next up, let’s set the client up to use a configuration file…

So we’ll add an App.config file to the client now, but let’s name it Client.config and again set the Visual Studio properties to ensure it’s always copied to the bin folder.

Add the following to the configuration file

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.runtime.remoting>
    <application>
      <client>
        <wellknown 
          type="OldRemoteServer.RemoteService, OldRemoteServer" 
          url="tcp://localhost:1002/Remote" />
      </client>
    </application>
  </system.runtime.remoting>
</configuration>

It might seem a little odd that we’re declaring the type as the implementation within the server code, but the reason will hopefully become clear.

Add a reference within the client to the RemoteServer (if you have the implementation in a DLL, all the better; we didn’t do that, so I’m referencing the server EXE itself). This now gives us access to the implementation of RemoteService.

Change the client Main method to

RemotingConfiguration.Configure("Client.config", false);

var obj = new RemoteService();
obj.Write("Hello World");

don’t forget to add the using clause

using System.Runtime.Remoting;

This bit might seem a little strange, based upon what we’ve previously done and how we’ve kept a separation of interface and implementation. Aren’t we now simply creating a local instance of the RemoteService, you might ask.

Well try it, run the server and then the client and you’ll find .NET has created a transparent proxy for us and calls to the RemoteService will in fact go to the server.

Whilst this makes things very easy, I must admit I prefer to not reference the implementation of the RemoteService.

What about named pipes?

Let’s now look at the changes to use the IPC protocol (for implementing named pipes) in .NET remoting. I’ll just briefly cover the changes required to implement this.

To start with let’s rewrite the server and client in code. So first the Main method in the server should now look like

var channel = new IpcChannel("ipcname");
ChannelServices.RegisterChannel(channel, false);

RemotingConfiguration.RegisterWellKnownServiceType(typeof(RemoteService),
   "Remote",
   WellKnownObjectMode.Singleton);

Console.ReadLine();

So the only real difference from the Tcp implementation is the use of an IpcChannel and the name supplied instead of a port.

The client then looks like this

var obj = (IRemoteService)Activator.GetObject(
   typeof(IRemoteService), 
   "ipc://ipcname/Remote");

obj.Write("Hello World");

Simple enough.

Now let’s change the code to use a configuration file.

The server Main method should now look like this

RemotingConfiguration.Configure("Server.config", false);
Console.ReadLine();

and the Server.config should look like this

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.runtime.remoting>
  <application>
    <service>
      <wellknown
        type="OldRemoteServer.RemoteService, OldRemoteServer"
        objectUri="Remote"
        mode="Singleton" />
    </service>
    <channels>
      <channel ref="ipc" portName="ipcname"/>
    </channels>
  </application>

  </system.runtime.remoting>
</configuration>

The client code should be

RemotingConfiguration.Configure("Client.config", false);
var obj = new RemoteService();
obj.Write("Hello World");

and its Client.config should look like this

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.runtime.remoting>
    <application>
      <client>
        <wellknown 
          type="OldRemoteServer.RemoteService, OldRemoteServer" 
          url="ipc://ipcname/Remote" />
      </client>
    </application>
  </system.runtime.remoting>
</configuration>