Category Archives: Programming

Creating a custom Linq Provider

To create a custom LINQ provider we need to implement the IQueryable<T> (or IOrderedQueryable<T>) and IQueryProvider interfaces.

This is a summary of several posts/blogs I’ve read on the subject and source code available on github etc. See References below.

IQueryable<T>

To implement a minimal LINQ provider we need to implement IQueryable<T>. This interface inherits from IEnumerable<T> and IEnumerable. The members IQueryable itself adds are minimal, just three properties:

  1. ElementType
  2. Expression
  3. Provider

IOrderedQueryable<T>

If we want to support the sorting query operators, then we need to implement IOrderedQueryable<T>.

IOrderedQueryable<T> inherits from IEnumerable, IEnumerable<T>, IOrderedQueryable, IQueryable and IQueryable<T>.

We can implement an IOrderedQueryable that’s reusable for most situations, as follows

public class Queryable<T> : IOrderedQueryable<T>
{
   public Queryable(IQueryContext queryContext)
   {
      Initialize(new QueryProvider(queryContext), null);
   }

   public Queryable(IQueryProvider provider)
   {
     Initialize(provider, null);
   }

   internal Queryable(IQueryProvider provider, Expression expression)
   {
      Initialize(provider, expression);
   }

   private void Initialize(IQueryProvider provider, Expression expression)
   {
      if (provider == null)
         throw new ArgumentNullException("provider");
      if (expression != null && !typeof(IQueryable<T>).
             IsAssignableFrom(expression.Type))
         throw new ArgumentException(
              String.Format("Not assignable from {0}", expression.Type), "expression");

      Provider = provider;
      Expression = expression ?? Expression.Constant(this);
   }

   public IEnumerator<T> GetEnumerator()
   {
      return (Provider.Execute<IEnumerable<T>>(Expression)).GetEnumerator();
   }

   System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
   {
      return (Provider.Execute<System.Collections.IEnumerable>(Expression)).GetEnumerator();
   }

   public Type ElementType
   {
      get { return typeof(T); }
   }

   public Expression Expression { get; private set; }
   public IQueryProvider Provider { get; private set; }
}

Note: The IQueryContext is not part of the LINQ interfaces but one I’ve created to allow me to abstract the actual “custom” part of the code and allow maximum reuse of default functionality.

IQueryProvider

The IQueryProvider basically requires two methods, CreateQuery and Execute, each implemented in both a generic and a non-generic version. Obviously the generic versions are strongly typed and the non-generic versions are loosely typed.

CreateQuery

The CreateQuery method is used to create a new instance of an IQueryable based upon the supplied expression tree.

Execute

The Execute method is used to actually execute a query expression, i.e. in a custom provider this is where we get the data from the data source, for example a web service or database. It’s in the Execute method that we both get the data and do any conversion, say to SQL or the like (if the LINQ query ultimately queries some other system).

An example of an implementation for an IQueryProvider might look like this…

public class QueryProvider : IQueryProvider
{
   private readonly IQueryContext queryContext;

   public QueryProvider(IQueryContext queryContext)
   {
      this.queryContext = queryContext;
   }

   public virtual IQueryable CreateQuery(Expression expression)
   {
      Type elementType = TypeSystem.GetElementType(expression.Type);
      try
      {
         return                
            (IQueryable)Activator.CreateInstance(typeof(Queryable<>).
                   MakeGenericType(elementType), new object[] { this, expression });
      }
      catch (TargetInvocationException e)
      {
         throw e.InnerException;
      }
   }

   public virtual IQueryable<T> CreateQuery<T>(Expression expression)
   {
      return new Queryable<T>(this, expression);
   }

   object IQueryProvider.Execute(Expression expression)
   {
      return queryContext.Execute(expression, false);
   }

   T IQueryProvider.Execute<T>(Expression expression)
   {
      return (T)queryContext.Execute(expression, 
                 (typeof(T).Name == "IEnumerable`1"));
   }
}

Note: Again, the IQueryContext is not part of the LINQ interfaces but one I’ve created to allow me to abstract the actual “custom” part of the code and allow maximum reuse of default functionality.

The TypeSystem class is taken from the post Walkthrough: Creating an IQueryable LINQ Provider

IQueryContext

The above shows some LINQ interfaces and some sample implementations. I’ve pointed out that IQueryContext is not part of LINQ but is instead something I’ve created (based upon reading various other implementations) to allow me to abstract the actual LINQ provider code specific to my provider’s implementation. Of course we could have derived from QueryProvider, but for now we “plug in” the data context instead. To change the implementation to derive from QueryProvider, simply remove the IQueryContext (also from the Queryable implementation) and override the Execute methods.

For now I’ll continue this post using the IQueryContext, so here’s the interface

public interface IQueryContext
{
   object Execute(Expression expression, bool isEnumerable);
}

Implementing the IQueryContext

Whether the actual Execute code (on the IQueryProvider) is abstracted into an IQueryContext or implemented within an alternate implementation of IQueryProvider, this is where the fun of actually “running” the LINQ query against your custom provider takes place.
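As an illustrative sketch (not part of the LINQ interfaces, and the InMemoryQueryContext and SourceSwapper names are my own), here’s one hypothetical way an IQueryContext could execute the expression tree against an in-memory collection: swap the root Queryable<T> constant for a LINQ to Objects source, then compile and invoke the rewritten tree.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical sketch: executes the LINQ expression tree against an
// in-memory collection by delegating to LINQ to Objects.
public class InMemoryQueryContext<T> : IQueryContext
{
   private readonly IEnumerable<T> data;

   public InMemoryQueryContext(IEnumerable<T> data)
   {
      this.data = data;
   }

   public object Execute(Expression expression, bool isEnumerable)
   {
      // isEnumerable is unused here; LINQ to Objects returns an
      // enumerable result anyway.
      // Swap the Queryable<T> constant at the root of the tree for
      // the real data source, then compile and invoke the tree.
      var rewritten = new SourceSwapper(data.AsQueryable()).Visit(expression);
      return Expression.Lambda(rewritten).Compile().DynamicInvoke();
   }

   private class SourceSwapper : ExpressionVisitor
   {
      private readonly IQueryable source;

      public SourceSwapper(IQueryable source)
      {
         this.source = source;
      }

      protected override Expression VisitConstant(ConstantExpression node)
      {
         // Replace any IQueryable constant (our Queryable<T>) with the
         // in-memory source so LINQ to Objects can evaluate the query.
         return node.Value is IQueryable ? Expression.Constant(source) : node;
      }
   }
}
```

A real provider would translate the tree to SQL or a web service call instead, but this shape is enough to get a working provider over in-memory data.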

When writing something like this…

var ctx = new Queryable<string>(new CustomContext());

var query = from s in ctx where s.StartsWith("T") select s;

what we are really doing is creating the query. Hence the CreateQuery methods on the IQueryProvider are called, but the actual data source (or whatever supplies the data for your custom LINQ provider) should not be touched until we reach the execution phase; this is known as deferred execution. The execution phase takes place when we enumerate over the query or call methods, such as Count(), against it.

foreach (var q in query)
{
   Console.WriteLine(q);
}

So at the point that we call GetEnumerator, implicitly via the foreach loop, that’s when the Execute method is called.

Executing the query

So, the Execute method is called and we will have an expression tree defined by LINQ supplied as an argument. We now need to translate that query into something useful, i.e. turn it into an SQL query for a custom database LINQ provider and then get the data for this query, or get data from a webservice and allow the query to be executed against the returned data etc.

As the actual decoding of the Expression is a fairly large subject in itself, I’ll leave that for another post, but suffice to say, there’s a lot we need to implement to duplicate some if not all of the “standard” LINQ query operators etc.

References

Walkthrough: Creating an IQueryable LINQ Provider
LINQ: Building an IQueryable provider series
LINQ and Deferred Execution

.NET collections refresher

I’ve been writing C#/.NET code for a long time, but every now and then I feel the need to give myself a refresher on the basics and it usually results in finding features that have been added which I knew nothing about. You know how it is, you get used to using certain classes and look no further. Well, here’s my refresher on the .NET collection classes as of .NET 4.5.

ArrayList

The ArrayList is a wrapper around a standard object[]. It’s weakly typed and may perform worse than a generic type such as List<T> due to boxing/unboxing of value types.

Unlike a standard object[] array, multidimensional elements are not directly supported, although of course you can create an ArrayList of ArrayLists.
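A quick illustrative sketch of the boxing difference between the two:

```csharp
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
   static void Main()
   {
      // Each int added to an ArrayList is boxed into an object...
      var weak = new ArrayList { 1, 2, 3 };
      int first = (int)weak[0]; // ...and must be cast/unboxed on the way out

      // List<int> stores the values directly; no boxing or casting required
      var strong = new List<int> { 1, 2, 3 };
      int total = strong[0] + strong[1] + strong[2];
   }
}
```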

BitArray

The BitArray class is a specialized collection for handling an array of bits. It allows us to manipulate the bits in various ways, but lacks bit shifting (for example).
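For example (a small illustrative sketch; note the bitwise methods mutate the instance they’re called on):

```csharp
using System;
using System.Collections;

class BitArrayDemo
{
   static void Main()
   {
      var a = new BitArray(new[] { true, false, true, false });
      var b = new BitArray(new[] { true, true, false, false });

      a.And(b); // a is now { true, false, false, false }
      a.Not();  // a is now { false, true, true, true }

      // There are no shift operators, so shifting must be done by hand
      Console.WriteLine(a[1]); // True
   }
}
```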

BlockingCollection<T>

The BlockingCollection class wraps an IProducerConsumerCollection<T> to allow thread-safe adding and removing of items. It also offers bounding capabilities.
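A minimal producer/consumer sketch showing both the bounding and the blocking behaviour:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ProducerConsumer
{
   static void Main()
   {
      // Bounded to 5 items: Add blocks when the collection is full
      using (var queue = new BlockingCollection<int>(boundedCapacity: 5))
      {
         var producer = Task.Run(() =>
         {
            for (int i = 0; i < 10; i++)
               queue.Add(i);
            queue.CompleteAdding(); // signal that no more items are coming
         });

         // GetConsumingEnumerable blocks until items arrive and completes
         // once CompleteAdding has been called and the collection is empty
         foreach (var item in queue.GetConsumingEnumerable())
            Console.WriteLine(item);

         producer.Wait();
      }
   }
}
```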

ConcurrentBag<T>

The ConcurrentBag is a thread-safe bag. Bags are used to store objects with no concern for ordering. They support duplicates and nulls.

ConcurrentDictionary<TKey, TValue>

The ConcurrentDictionary is a thread-safe dictionary.
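Its main advantage over a locked Dictionary is atomic check-and-set methods such as GetOrAdd and AddOrUpdate, sketched here:

```csharp
using System;
using System.Collections.Concurrent;

class ConcurrentDictionaryDemo
{
   static void Main()
   {
      var counts = new ConcurrentDictionary<string, int>();

      // AddOrUpdate performs the check-and-set atomically, avoiding the
      // race a separate ContainsKey/indexer pair would have
      counts.AddOrUpdate("clicks", 1, (key, existing) => existing + 1);
      counts.AddOrUpdate("clicks", 1, (key, existing) => existing + 1);

      int views = counts.GetOrAdd("views", 0);

      Console.WriteLine(counts["clicks"]); // 2
      Console.WriteLine(views);            // 0
   }
}
```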

ConcurrentQueue<T>

The ConcurrentQueue is a thread-safe Queue (FIFO collection).

ConcurrentStack<T>

The ConcurrentStack is a thread-safe Stack (LIFO collection).

Collection<T>

The Collection class is a base class for generic collections. Whilst you can instantiate a Collection<T> directly (i.e. it’s not abstract) it’s designed more from the point of view of extensibility, i.e. deriving your own collection type from it. It only really differs from a List<T> in that it offers virtual methods for you to override to customize your collection, whereas a List<T> doesn’t offer such extensibility.

Dictionary<TKey, TValue>

A key/value generic class. Duplicates are not allowed as keys, nor are nulls. For a thread-safe implementation, look at ConcurrentDictionary<TKey, TValue>.

HashSet<T>

The HashSet<T> is a set collection and thus cannot contain duplicate elements. Elements are not held in any particular order. It is aimed at set-type operations, such as IntersectWith, Overlaps etc.
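A short sketch of the set operations in use:

```csharp
using System;
using System.Collections.Generic;

class HashSetDemo
{
   static void Main()
   {
      var evens = new HashSet<int> { 2, 4, 6, 8 };
      var small = new HashSet<int> { 1, 2, 3, 4 };

      Console.WriteLine(evens.Overlaps(small)); // True - they share 2 and 4

      evens.IntersectWith(small); // evens is now { 2, 4 }
      evens.Add(2);               // duplicates are silently ignored

      Console.WriteLine(evens.Count); // 2
   }
}
```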

Hashtable

The Hashtable stores key/value pairs which are organised based upon the hash code of the key. As per the Dictionary class, keys cannot be null and duplicates are not allowed. Basically it’s a weakly typed Dictionary.

HybridDictionary

The HybridDictionary is a weakly typed Dictionary which uses a ListDictionary while there are only a few items stored, switching to a Hashtable when the collection gets large.

ImmutableDictionary<TKey, TValue>

The ImmutableDictionary is, simply put, an immutable Dictionary.

See also ImmutableInterlocked.

ImmutableHashSet<T>

The ImmutableHashSet is, simply put, an immutable HashSet.

See also ImmutableInterlocked.

ImmutableList<T>

The ImmutableList is, simply put, an immutable List.

See also ImmutableInterlocked.
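A brief sketch (assuming the System.Collections.Immutable NuGet package is referenced); the same non-destructive pattern applies to all the immutable collections listed here:

```csharp
using System;
using System.Collections.Immutable;

class ImmutableListDemo
{
   static void Main()
   {
      var original = ImmutableList.Create(1, 2, 3);

      // "Mutating" methods return a new list; the original is untouched
      var extended = original.Add(4);

      Console.WriteLine(original.Count); // 3
      Console.WriteLine(extended.Count); // 4
   }
}
```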

ImmutableQueue<T>

The ImmutableQueue is, simply put, an immutable Queue.

See also ImmutableInterlocked.

ImmutableSortedDictionary<TKey, TValue>

The ImmutableSortedDictionary is, simply put, an immutable SortedDictionary.

See also ImmutableInterlocked.

ImmutableSortedSet<T>

The ImmutableSortedSet is, simply put, an immutable SortedSet.

See also ImmutableInterlocked.

ImmutableStack<T>

The ImmutableStack is, simply put, an immutable Stack.

See also ImmutableInterlocked.

KeyedCollection<TKey, TItem>

The KeyedCollection is an abstract class which is a hybrid between a collection based upon an IList<T> and an IDictionary<TKey, TItem>. Unlike a Dictionary, the element stored is not a key/value pair; the key is embedded in the value.
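A minimal sketch; the derived class tells KeyedCollection how to extract the key from the item itself (the Order and OrderCollection names here are my own invention):

```csharp
using System;
using System.Collections.ObjectModel;

public class Order
{
   public int Id { get; set; }
   public string Customer { get; set; }
}

// GetKeyForItem tells the base class where the key lives in the value
public class OrderCollection : KeyedCollection<string, Order>
{
   protected override string GetKeyForItem(Order item)
   {
      return item.Customer;
   }
}

class KeyedCollectionDemo
{
   static void Main()
   {
      var orders = new OrderCollection
      {
         new Order { Id = 42, Customer = "Edmund" }
      };

      // Access by key (like a dictionary) or by index (like a list)
      Console.WriteLine(orders["Edmund"].Id); // by key
      Console.WriteLine(orders[0].Id);        // by index
   }
}
```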

LinkedList<T>

The LinkedList class is really that, a doubly linked list. Obviously this means insertion and removal of items are O(1) operations. Copying of elements requires far less memory than copying of array-like structures, and when adding more and more items the underlying data structure does not need to resize as an array would. On the downside, each element requires next and previous references along with the data itself, plus you do not get O(1) access to elements by index, and this may be a major downside depending upon your application.
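A small sketch of the node-based API:

```csharp
using System;
using System.Collections.Generic;

class LinkedListDemo
{
   static void Main()
   {
      var list = new LinkedList<string>();

      var middle = list.AddFirst("middle");
      list.AddFirst("first");         // O(1) insertion at the head
      list.AddLast("last");           // O(1) insertion at the tail
      list.AddAfter(middle, "extra"); // O(1) insertion given a node reference

      // But there's no indexer; reaching the nth element means walking the list
      foreach (var item in list)
         Console.WriteLine(item); // first, middle, extra, last
   }
}
```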

List<T>

The List<T> is, in essence, a generic ArrayList: it’s type safe and, due to the use of generics, avoids the boxing/unboxing overhead when handling value types.

ListDictionary

The ListDictionary is a simple IDictionary implementation. It uses a singly linked list and is smaller and faster than a Hashtable if the number of elements is 10 or less.

NameValueCollection

The NameValueCollection is used to store name/value pairs; however, unlike say a dictionary, nulls can be used for both name and value.

ObservableCollection<T>

The ObservableCollection provides notifications when items are added to or removed from the collection or when the collection is refreshed. Used a lot with data binding due to its support for INotifyPropertyChanged and INotifyCollectionChanged.
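A quick sketch of the change notifications in action:

```csharp
using System;
using System.Collections.ObjectModel;

class ObservableDemo
{
   static void Main()
   {
      var items = new ObservableCollection<string>();

      // CollectionChanged fires for adds, removes, moves and resets
      items.CollectionChanged += (sender, e) =>
         Console.WriteLine("Action: {0}", e.Action);

      items.Add("one");  // Action: Add
      items.RemoveAt(0); // Action: Remove
   }
}
```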

OrderedDictionary

The OrderedDictionary represents key/value data where the data may be accessed via the key or the index.

Queue<T>

A Queue is a FIFO collection whereby items are added to the back of the queue (using Enqueue) and taken from the front (using Dequeue). A queue ultimately stores its elements in an array.
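For example:

```csharp
using System;
using System.Collections.Generic;

class QueueDemo
{
   static void Main()
   {
      var queue = new Queue<string>();

      queue.Enqueue("first"); // added to the back
      queue.Enqueue("second");

      Console.WriteLine(queue.Peek());    // "first" - inspect without removing
      Console.WriteLine(queue.Dequeue()); // "first" - removed from the front
      Console.WriteLine(queue.Count);     // 1
   }
}
```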

ReadOnlyCollection<T>

The ReadOnlyCollection class is for a generic read-only/immutable collection.

ReadOnlyDictionary<TKey, TValue>

The ReadOnlyDictionary is a class for generic key/value pairs in a read-only/immutable dictionary.

ReadOnlyObservableCollection<T>

The ReadOnlyObservableCollection is a class for a generic read-only/immutable observable collection. Changes may be made to the underlying observable collection but not via the ReadOnlyObservableCollection wrapper.

SortedDictionary<TKey, TValue>

The SortedDictionary is a dictionary where items are sorted on the key.

SortedList<TKey, TValue>

The SortedList is a collection of key/value pairs that are sorted by key. A SortedList uses less memory than a SortedDictionary, but the SortedDictionary has faster insertion and removal of items.

SortedSet<T>

The SortedSet is a collection where items are maintained in sorted order. Duplicate items are not allowed.

Stack<T>

The Stack is standard LIFO (last-in first-out) collection. Items are pushed onto the top of the Stack and popped off of the top of the stack.

StringCollection

The StringCollection is a specialization of a standard collection for handling strings.

StringDictionary

The StringDictionary is a specialization of a standard dictionary for handling strings.

SynchronizedCollection<T>

The SynchronizedCollection is a thread-safe collection. It’s part of the System.ServiceModel assembly and used within WCF.

SynchronizedKeyedCollection<TKey, TValue>

The SynchronizedKeyedCollection is a thread-safe key/value type collection. It’s abstract, so is designed to be derived from for your specific needs. It’s part of the System.ServiceModel assembly and used within WCF.

SynchronizedReadOnlyCollection<T>

The SynchronizedReadOnlyCollection is a thread-safe read-only collection. This is part of the System.ServiceModel assembly.

Dynamic objects in C#

The dynamic keyword was added to C# in .NET 4.0. It was probably primarily added to support interop with languages such as IronPython, to aid interop with COM objects, and for handling anonymous types.

What’s a dynamic variable ?

A dynamic variable is not statically typed and can be assigned any type (just like an object); however, we can write code that interacts with the type even though we might not know what the type looks like. In other words, it’s late bound. What this really means is that we can call methods, properties etc. even though we might not have the actual type to compile against. Our code will compile correctly even without the known type, thanks to the dynamic keyword, but errors may occur at runtime if the actual type does not have the expected methods, properties etc., in which case we’ll get a RuntimeBinderException.

So to make things a little clearer, let’s assume that we have a factory class which creates an object. We don’t know the actual object type but we do know it has a FirstName property – here’s a contrived example of what the code might look like…

public static class Factory
{
   public static object Create()
   {
      return new Person{FirstName = "Edmund", LastName = "Blackadder"};
   }
}

Now, obviously if we knew the type returned was a Person type, we could cast the return to type Person. But for the sake of argument we’ll assume we do not know what’s being returned, all we know is that it has a FirstName property. So we could create a dynamic variable thus

dynamic p = Factory.Create();
Console.WriteLine(p.FirstName);

Notice, we interact with the FirstName property even though we do not know the type that the dynamic variable represents. As stated, at compile time this works just fine and at runtime this will output the string “Edmund”. However, as also stated previously, say we made a spelling mistake and FirstName became FrstName, this would also compile but at runtime would fail with the RuntimeBinderException exception.

We can also store anonymous types (as mentioned at the start of this post) in a dynamic variable and interact with those type’s properties and methods, for example

dynamic p = new {FirstName="Edmund"};
Console.WriteLine(p.FirstName);

Dynamic dynamic variables

We can also define dynamic behaviour at runtime by defining our own dynamic type. So creating dynamic dynamic types (if you like).

For example, one of the cool features of the CsvProvider in F# is that, if we have a CSV file with the first row storing the column headings, when we enumerate over the returned rows of data the object we get back will automatically have properties named after the headings, with the row data assigned to them. Note: as far as I currently know, we cannot replicate the F# ability to add to intellisense, but let’s see how we might implement similarly late-bound column-name properties on a type.

So let’s assume we had the following stock data in a CSV file

Symbol,High,Low,Open,Close
MSFT,37.60,37.30,37.35,37.40
GOOG,1190,1181.38,1189,1188
etc.

We might like to read each line from a CsvReader class; maybe it has a ReadLine() method that returns a CsvData object which can change its properties depending upon the heading names in the CSV file.

So we implement this CsvData class by deriving it from DynamicObject, which allows us to dynamically create properties at runtime. This is similar to implementing dynamic proxies.

public class CsvData : DynamicObject
{
   private CsvReader reader;
   
   public CsvData(CsvReader reader)
   {
      this.reader = reader;
   }

   public override bool TryGetMember(GetMemberBinder binder, out object result)
   {
      result = null;

      if(reader.Headers.Contains(binder.Name))
      {
          // return the value for this column from the current line
          // (assumes the reader exposes the current line's values
          // indexed by header name)
          result = reader.CurrentLine[binder.Name];
          return true;
      }
      return false;
   }
}

So the above code assumes the CsvReader has some collection type from which we can get the headers (and check whether a header name exists).

Basically if we use the following code

CsvReader reader = new CsvReader("stocks.csv");

dynamic stock = reader.ReadLine();
Console.WriteLine(
   String.Format("Symbol {0} High {1} Low {2}", 
      stock.Symbol, stock.High, stock.Low));

we can access properties which are created at runtime.

Obviously if we were to change the CsvReader to read a different type of data, maybe a list of CD data, for example

Artist,Album,Genre
Alice Cooper,Billion Dollar Babies,Rock
Led Zeppelin,Houses of the Holy,Rock

we can still use the same CsvReader and CsvData code to access the new properties. In this case our dynamic variable will use properties such as cd.Artist, cd.Album, cd.Genre etc.

Dynamics in LINQ

Instead of a ReadLine method on the CsvReader, what if it implemented the IEnumerable interface? We could then use the CsvReader with LINQ as follows

CsvReader reader = new CsvReader("stocks.csv");

var data = from dynamic line in reader
               where line.Close > line.Open
                  select line;

foreach(var stock in data)
{
   Console.WriteLine(
      String.Format("Symbol {0} High {1} Low {2}", 
         stock.Symbol, stock.High, stock.Low));
}

Notice the use of the dynamic keyword before the variable line. Without this the line.Close and line.Open will cause a compile time error.

Note: the above example of course assumes that line.Close and line.Open are both numerical values.

yield return inside a using block

This is a short post to act as a reminder.

I decided to refactor some code to return IEnumerable (using yield return) instead of returning an IList. So I have a method named Deserialize which creates a memory stream and then passes it to an overload which takes a Stream. For example

public IEnumerable<T> Deserialize(string data)
{
   using (MemoryStream memoryStream = new MemoryStream())
   {
      memoryStream.Write(Encoding.ASCII.GetBytes(data), 0, data.Length);
      memoryStream.Seek(0, SeekOrigin.Begin);

      return Deserialize(memoryStream);
   }
}

public IEnumerable<T> Deserialize(Stream stream)
{
   // does some stuff
   yield return item;
}

The unit tests thankfully spotted a simple mistake straight away: the using block is exited before the first yield return takes place, because the outer method returns the (lazily evaluated) enumerable immediately. So the stream is disposed of, and therefore invalid, by the time the overload tries to access it. Doh!

Thankfully it’s easy to fix using the following

public IEnumerable<T> Deserialize(string data)
{
   using (MemoryStream memoryStream = new MemoryStream())
   {
      memoryStream.Write(Encoding.ASCII.GetBytes(data), 0, data.Length);
      memoryStream.Seek(0, SeekOrigin.Begin);

      foreach (var item in Deserialize(memoryStream))
      {
         yield return item;
      }   
   }
}

Obviously we could have alternatively used something like

return new List<T>(Deserialize(memoryStream));

but this then negates the purpose of using the yield return in the first place.

F# Type Providers

The FSharp.Data library introduces several TypeProviders. At the time of writing the general purpose TypeProviders include ones for XML, JSON and CSV, along with TypeProviders specific to Freebase and WorldBank.

Either add a reference to FSharp.Data or use NuGet to install FSharp.Data to work through these examples.

CsvTypeProvider

The CsvTypeProvider (as the name suggests) allows us to interact with Csv data. Here’s some code based upon the Titanic.csv Csv file supplied with FSharp.Data’s source via GitHub.

open FSharp.Data

type TitanicData = CsvProvider<"c:\dev\Titanic.csv", SafeMode=true, PreferOptionals=true>

[<EntryPoint>]
let main argv =     

    let ds = TitanicData.Load(@"c:\dev\Titanic.csv")

    for i in ds.Data do
        printfn "%s" i.Name.Value 

    0 // return an integer exit code

Firstly we open the FSharp.Data namespace, then create a type abbreviation for the CsvProvider. We pass in the sample data file so that the provider can generate its schema.

We can ignore the main function and return value and instead jump straight to the let ds = … line, where we load the actual data that we want to parse. The type abbreviation used this same file to generate the schema, but in reality we could now load any file that matches the expected Titanic file schema.

We can now enumerate over the file data using ds.Data and, better still, within Visual Studio we can now use intellisense to look at the headings generated by the CsvProvider, which appear as properties of the type; i.e. i.Name is an Option type supplied by the schema. Had there been a heading in the Csv named “Passenger Id”, we’d have a ``Passenger Id`` property on the instance i within the loop, and hence be able to write something like printfn "%i" i.``Passenger Id``.Value.

Redirecting standard output using Process

If we are running a console application, we might wish to capture the output for our own use…

Asynchronous output

By default any “standard” output from an application run via the Process class is output to its own window. To capture this output for our own use we can write the following…

using (var p = new Process())
{
   p.StartInfo = new ProcessStartInfo("nant", "build")
   {
      UseShellExecute = false,
      RedirectStandardOutput = true
   };

   p.OutputDataReceived += OnOutputDataReceived;
   p.Start();
   p.BeginOutputReadLine();
   p.WaitForExit();
   return p.ExitCode;
}

The above is part of a method that returns an int and runs nant with the argument build. I wanted to capture the output from this process and direct to my own console window.

The bits to worry about are…

  1. Set UseShellExecute to false
  2. Set RedirectStandardOutput to true
  3. Subscribe to the OutputDataReceived event
  4. Call BeginOutputReadLine after you start the process

Synchronous output

If you want to do a similar thing but capture all the output when the process ends, you can use

using (var p = new Process())
{
   p.StartInfo = new ProcessStartInfo("nant", "build")
   {
      UseShellExecute = false,
      RedirectStandardOutput = true
   };

   p.Start();
   p.WaitForExit();

   using (StreamReader reader = p.StandardOutput)
   {
      Console.WriteLine(reader.ReadToEnd());
   }
   return p.ExitCode;
}

In this synchronous version we do not output anything until the process completes, then we read from the process’s StandardOutput. We still require UseShellExecute and RedirectStandardOutput to be set up as previously outlined, but no longer subscribe to any events or use BeginOutputReadLine to start the output process.

ReactiveUI 5.x changes

I’m not sure when these changes were made, but I just updated an application to use both .NET 4.5 and the latest NuGet packages for ReactiveUI (version 5.4), and it looks like quite a few things have changed.

This post is just going to show some of the changes I’ve hit whilst migrating my code. This is probably not a full list as I haven’t used all the parts of ReactiveUI in my applications thus far.

RaiseAndSetIfChanged

// old style
this.RaiseAndSetIfChanged(x => x.IsSelected, ref isSelected, value);

// new style
this.RaiseAndSetIfChanged(ref isSelected, value);

See the post on CallerMemberNameAttribute for more information on how this works. With this we no longer need to write the lambda expressions required by earlier versions.

Note: For raising change notifications on properties other than the one being set, i.e. if IsSelected is the property above and IsUpdated should also raise a property change event, we use

this.RaisePropertyChanged("IsUpdated");


RxApp.DeferredScheduler

It would appear that the RxApp.DeferredScheduler has been renamed RxApp.MainThreadScheduler. Admittedly a better name.

RxApp.MessageBus

Looks like we just use MessageBus.Current instead.

ReactiveAsyncCommand

It appears that ReactiveAsyncCommand has merged into ReactiveCommand, so changes look like this

// old style
ClearSearch = new ReactiveAsyncCommand(null);
ClearSearch.RegisterAsyncAction(o =>
{
   // do something
});

// new style
ClearSearch = new ReactiveCommand();
ClearSearch.RegisterAsyncAction(o =>
{
   // do something
});

but there’s more

// old style
OK = ReactiveCommand.Create(_ => true, DoOK);

// new style
OK = new ReactiveCommand();
OK.Subscribe(DoOK);

If we need to enable/disable commands then we can pass an IObservable into the first parameter of ReactiveCommand, for example

OK = new ReactiveCommand(this.WhenAny(x => x.Text, x => !String.IsNullOrEmpty(x.Value)));


IReactiveAsyncCommand

As above, IReactiveAsyncCommand has been merged into IReactiveCommand.

ReactiveCollection

This has been renamed to ReactiveList.

Supporting initialization using the ISupportInitialize

This post doesn’t really explain anything too exciting; it’s more a reminder to myself on the existence and possible use of this interface.

Admittedly the ISupportInitialize interface tends to be thought of, almost exclusively at times, as an interface used by UI controls/components; if you look at the code in a designer.cs file you may find controls which use ISupportInitialize, for example

((System.ComponentModel.ISupportInitialize)(this.grid)).BeginInit();

this.grid.Name = "grid";
this.grid.Size = new System.Drawing.Size(102, 80);
this.grid.TabIndex = 5;

((System.ComponentModel.ISupportInitialize)(this.grid)).EndInit();

But this interface is, of course, not limited to controls/components. It’s defined for “simple, transacted notification for batch initialization”, so why not reuse it for initialization of other code if need be.

In situations where your code might be getting set up/initialized and you may not want events to fire or the like during the initialization phase, why not just implement the ISupportInitialize interface and use the BeginInit/EndInit pattern.

How about creating a simple helper class to automatically call a class’s BeginInit and EndInit methods if it supports ISupportInitialize

public class SupportInitialization : IDisposable
{
   private ISupportInitialize initSupported;
   public SupportInitialization(object o)
   {
      initSupported = o as ISupportInitialize;
      if (initSupported != null)
         initSupported.BeginInit();
   }

   public void Dispose()
   {
      if (initSupported != null)
         initSupported.EndInit();
   }
}
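In usage, assuming a control that implements ISupportInitialize (a WinForms DataGridView, say), the initialization is then scoped by a using block:

```csharp
using System.Windows.Forms;

// BeginInit is called (where supported) on construction and EndInit on
// Dispose, so the control can defer event handling/validation until the
// end of the using block
var grid = new DataGridView();

using (new SupportInitialization(grid))
{
   grid.Name = "grid";
   grid.TabIndex = 5;
}
```

Objects that don’t implement ISupportInitialize pass through the helper unharmed, since the cast via as simply yields null.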

Is my application’s UI still responsive ?

We might, on occasion, need to check whether an application we’ve spawned has a responsive UI, or of course we might wish to check our own application, for example by polling the following code within a timer to check the application is still responsive.

Firstly, we could use the property Responding on the Process class which allows us to check whether a process’s UI is responsive.

Process p = Process.Start("SomeApplication.exe");

if(!p.Responding)
{
   // the application's ui is not responding
}

// on the currently running application we might use 

if(!Process.GetCurrentProcess().Responding)
{
   // the application's ui is not responding
}

Alternatively we could use the following code, which allows us to set a timeout value

public static class Responsive
{
   /// <summary>
   /// Tests whether the object (passed via the ISynchronizeInvoke interface) 
   /// is responding in a predetermined timescale (the timeout). 
   /// </summary>
   /// <param name="si">An object that supports ISynchronizeInvoke such 
   /// as a control or form</param>
   /// <param name="timeout">The timeout in milliseconds to wait for a response 
   /// from the UI</param>
   /// <returns>True if the UI is responding within the timeout else false.</returns>
   public static bool Test(ISynchronizeInvoke si, int timeout)
   {
      if (si == null)
      {
         return false;
      }
      ManualResetEvent ev = new ManualResetEvent(false);
      si.BeginInvoke((Action)(() => ev.Set()), null);
      return ev.WaitOne(timeout,false);
   }
}

In usage we might write the following

if (IsHandleCreated)
{
   if (!Responsive.Test(this, responseTestDelay))
   {
       // the application's ui is not responding
   }
}

Hosting IronRuby in a C# application

After writing the post Hosting IronPython in a C# application I decided to see how easy it was to host IronRuby in place of IronPython.

Before we go any further let’s setup a project. Create a console application (mine’s just named ConsoleApplication) and then, using NuGet, download/reference IronRuby. The version I have is 1.1.3.

Allowing our scripts to use our assembly types

Let’s look at some code

public class Application
{
   public string Name { get { return "MyApp"; } }
}

class Program
{
   static void Main(string[] args)
   {
      string code = "app = ConsoleApplication::Application.new\n" + 
                    "puts app.Name";

      ScriptEngine engine = Ruby.CreateEngine();
      ScriptScope scope = engine.CreateScope();

      engine.Execute("require 'mscorlib'", scope);
      engine.Execute("require 'ConsoleApplication'", scope);

      ScriptSource source = engine.CreateScriptSourceFromString(code, 
                     SourceCodeKind.Statements);
      source.Execute(scope);
   }
}

Here, as per the IronPython post, we create some scripting code, obviously this time in Ruby. Next we create an instance of the Ruby engine and create the scope object. We then execute a couple of commands to “require” a couple of assemblies, including our ConsoleApplication assembly. The next two lines will also be familiar if you’ve looked at the IronPython post, the first of the two creates a ScriptSource object from the supplied Ruby code and the next line executes the code.

All looks good, except I was getting the following exception

An unhandled exception of type ‘System.MissingMethodException’ occurred in Microsoft.Dynamic.dll

Additional information: Method not found: ‘Microsoft.Scripting.Actions.Calls.OverloadInfo[] Microsoft.Scripting.Actions.Calls.ReflectionOverloadInfo.CreateArray(System.Reflection.MemberInfo[])’.

I found the following post IronRuby and the Dreaded Method not found error which demonstrated a solution to the problem.

If we insert the following line of code, after the last engine.Execute line

engine.Execute("class System::Object\n\tdef initialize\n\tend\nend", scope);

or in Ruby code we could have written

class System::Object
   def initialize
   end
end

this will solve the problem – I don’t have any explanation for this, at this time, but it does work.

What if we already have an instance of an object in our hosting application that we want to make available to IronRuby ?

As per the IronPython example, we can simply set a variable up on the scope object and access the variable from our script, as per

string code = "puts host.Name";

ScriptEngine engine = Ruby.CreateEngine();
ScriptScope scope = engine.CreateScope();

scope.SetVariable("host", new Application());

engine.Execute("require 'mscorlib'", scope);
engine.Execute("require 'ConsoleApplication'", scope);

ScriptSource source = engine.CreateScriptSourceFromString(code,        
          SourceCodeKind.Statements);
source.Execute(scope);


Using a dynamic to create our variables

Even cooler than creating the variables using SetVariable on the scope object, if we change our scope code to look like the following

dynamic scope = engine.CreateScope();

we can add the variables directly to the scope dynamic variable, for example

scope.host = new Application();