Scientist in the making (aka using Science.NET)

When we’re dealing with refactoring legacy code, we’ll often try to ensure the existing unit tests (if they exist) or new ones cover as much of the code as possible before refactoring it. But there’s always a concern about turning off the old code completely until we’ve got high confidence in the new code. Obviously the test coverage figures and the unit tests themselves should give us that confidence, but wouldn’t it be nice if we could instead run the old and new code in parallel and compare the behaviour, or at least the results, of the code? This is where the Scientist library comes in.

Note: This is very much (from my understanding) at an alpha/pre-release stage of development, so any code written here may differ from the way the library ends up working. So basically what I’m saying is: this code works at the time of writing.

Getting started

So the elevator pitch for Science.NET is that it allows us to “run two different implementations of code, side by side, and compare the results”. Let’s expand on that with an example.

First off, we’ll set-up our Visual Studio project.

  • Create a new console application (just because it’s simple to get started with)
  • From the Package Manager Console, execute Install-Package Scientist -Pre

Let’s start with a very simple example. Assume we have a method which returns a numeric value; we don’t really need to worry much about what this value means, but if you’d like a back story, let’s assume we import data into an application and the method calculates the confidence that the data matches a known import pattern.

So the legacy code, or the code we wish to verify/test against looks like this

public class Import
{
   public float CalculateConfidenceLevel()
   {
      // do something clever and return a value
      return 0.9f;
   }
}

Now our new Import class looks like this

public class NewImport
{
   public float CalculateConfidenceLevel()
   {
      // do something clever and return a value
      return 0.4f;
   }
}

Okay, okay, I know the result is wrong, but this is meant to demonstrate the Science.NET library, not my Import code.

Right, so what we want to do is run the two versions of the code side-by-side and see whether they always give the same result. So we’re going to simply run these in our console’s Main method for now, but of course the idea is that this code would be run from wherever you currently run the Import code. For now just add the following to Main (we’ll discuss strategies for running the code briefly after this)

var import = new Import();
var newImport = new NewImport();

float confidence = Scientist.Science<float>(
   "Confidence Experiment", experiment =>
   {
      experiment.Use(() => import.CalculateConfidenceLevel());
      experiment.Try(() => newImport.CalculateConfidenceLevel());
   });

Now, if you run this console application you’ll see the confidence variable will have the value 0.9 in it, as it used the .Use code for the result, but the Science method (surely this should be named the Experiment method :)) will actually run both of our methods and compare the results.

Obviously as both the existing and new implementations are run side-by-side, performance might be a concern for complex methods, especially if running like this in production. See the RunIf method for turning on/off individual experiments if this is a concern.

The “Confidence Experiment” string denotes the name of the comparison test and can be useful in reports. But if you ran this code you’ll have noticed everything just worked, i.e. no errors, no reports, nothing. That’s because at this point the default result publisher (which can be accessed via Scientist.ResultPublisher) is an InMemoryResultPublisher; we need to implement a publisher to output to the console (or maybe to a logger or some other mechanism).

So let’s pretty much take the MyResultPublisher example, but output to the console, so we have

public class ConsoleResultPublisher : IResultPublisher
{
   public Task Publish<T>(Result<T> result)
   {
      Console.WriteLine(
         $"Publishing results for experiment '{result.ExperimentName}'");
      Console.WriteLine($"Result: {(result.Matched ? "MATCH" : "MISMATCH")}");
      Console.WriteLine($"Control value: {result.Control.Value}");
      Console.WriteLine($"Control duration: {result.Control.Duration}");
      foreach (var observation in result.Candidates)
      {
         Console.WriteLine($"Candidate name: {observation.Name}");
         Console.WriteLine($"Candidate value: {observation.Value}");
         Console.WriteLine($"Candidate duration: {observation.Duration}");
      }

      if (result.Mismatched)
      {
         // save mismatched experiments to the DB
      }

      return Task.FromResult(0);
   }
}

Now insert the following before the float confidence = line in our Main method

Scientist.ResultPublisher = new ConsoleResultPublisher();

Now when you run the code you’ll get the following output in the console window

Publishing results for experiment 'Confidence Experiment'
Control value: 0.9
Control duration: 00:00:00.0005241
Candidate name: candidate
Candidate value: 0.4
Candidate duration: 00:00:03.9699432

So now you’ll see where the string in the Science method can be used.


Check out the documentation or the source itself for more information.

Real world usage?

First off let’s revisit how we might actually design our code to use such a library. The example was created from scratch to demonstrate basic use of the library, but it’s more likely that we’d either create an abstraction layer which instantiates and executes the legacy and new code or, if available, add the new method to the legacy implementation code. So in an ideal world our Import and NewImport classes might implement an IImport interface. Thus it would be best to implement a new version of this interface and within its methods call the Science code, for example

public interface IImport
{
   float CalculateConfidenceLevel();
}

public class ImportExperiment : IImport
{
   private readonly IImport import = new Import();
   private readonly IImport newImport = new NewImport();

   public float CalculateConfidenceLevel()
   {
      return Scientist.Science<float>(
         "Confidence Experiment", experiment =>
         {
            experiment.Use(() => import.CalculateConfidenceLevel());
            experiment.Try(() => newImport.CalculateConfidenceLevel());
         });
   }
}

I’ll leave it to the reader to add the : IImport after the Import and NewImport classes.

So now our Main method would have the following

Scientist.ResultPublisher = new ConsoleResultPublisher();

var import = new ImportExperiment();
var result = import.CalculateConfidenceLevel();

Using an interface like this now means it’s both easy to switch from the old Import to the experiment implementation and eventually to the new implementation, but then hopefully this is how we always code. I know those years of COM development make interfaces almost the first thing I write along with my love of IoC.

And more…

Comparison replacement

So the simple example above demonstrates the return of a primitive/standard type, but what if the return type is one of our own more complex objects, requiring a more complex comparison? We can supply our own comparison via the Compare method, for example

experiment.Compare((a, b) => a.Name == b.Name);

of course we could hand this comparison off to a more complex predicate.
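For example, if our import returned a richer result object, the predicate might compare several properties. This is just a sketch; the ImportResult type and its properties are hypothetical:

```csharp
using System;

public class ImportResult
{
    public string Name { get; set; }
    public float Confidence { get; set; }
}

public static class ResultComparer
{
    // a predicate we could hand off to experiment.Compare,
    // using a tolerance for the float comparison
    public static bool SameResult(ImportResult a, ImportResult b) =>
        a.Name == b.Name &&
        Math.Abs(a.Confidence - b.Confidence) < 0.001f;
}
```

which we’d then wire up with experiment.Compare((a, b) => ResultComparer.SameResult(a, b));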

Unfortunately the Science method expects a return type, hence if your aim is to run two methods with a void return, and maybe test some encapsulated data from the classes within the experiment, then you’ll have to do a lot more work.
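One workaround is to wrap each void call in a small function which runs it and then returns the encapsulated state we want to compare. Again just a sketch; Process and LastConfidence are hypothetical members standing in for whatever your classes actually expose:

```csharp
// hypothetical wrapper: run the void method, then return the
// encapsulated state we want the experiment to compare
static float RunAndCapture(IImport import)
{
    import.Process();              // hypothetical void method
    return import.LastConfidence;  // hypothetical state exposed for comparison
}

// the wrappers then slot into the experiment as usual
float confidence = Scientist.Science<float>(
    "Void Experiment", experiment =>
    {
        experiment.Use(() => RunAndCapture(import));
        experiment.Try(() => RunAndCapture(newImport));
    });
```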

Toggle on or off

The IExperiment interface, which we used to call .Use and .Try, also has the RunIf method which I mentioned briefly earlier. We might wish to write our code in such a way that the dev environment runs the experiments but production does not, ensuring our end users do not suffer performance hits due to the experiment running. We can use RunIf in the following manner

experiment.RunIf(() => !environment.IsProduction);

for example.

If we needed to include this line in every experiment it might be quite painful, so it’s actually more likely we’d use this to block/run specific experiments; maybe we run all experiments in all environments, except one very slow experiment.

To enable/disable all experiments, instead we can use

Scientist.Enabled(() => !environment.IsProduction);

Note: this method is not in the NuGet package I’m using but is in the current source on GitHub and in the documentation so hopefully it works as expected in a subsequent release of the NuGet package.

Running something before an experiment

We might need to run something before an experiment starts, but we want the code within the context of the experiment, a little like a test setup method. We can use

experiment.BeforeRun(() => BeforeExperiment());

In the above we’ll run some method BeforeExperiment() before the experiment continues.


I’ve not covered all the currently available methods here as the repository already does that, but hopefully I’ve given a peek into what you might do with this library.

NPOI saves the day


NPOI is a port of POI for .NET. You know how we on the .NET side like to prefix with N or, in the case of JUnit, change the J to N for our versions of Java libraries.

NPOI allows us to write Excel files without Excel needing to be installed. By writing files directly it also gives us speed, less likelihood of us leaving an Excel COM/Automation object in memory and, basically, a far nicer API.

So how did NPOI save the day?

I am moving an application to WPF and in doing so the third party controls also moved from WinForms to WPF versions. One, a grid control, used to have a great export to Excel feature which output the data in a specific way; unfortunately the WPF version did not write the Excel file in the same format. I was therefore tasked with re-implementing the Excel exporting code. I began with Excel automation, which seemed slow, and I found it difficult getting the output as we wanted. I then tried a couple of Excel libraries for writing the BIFF format (as used by Excel). Unfortunately these didn’t fully work and/or didn’t do what I needed. Then one of my Java colleagues mentioned POI, so I checked for an N version of POI, and there it was, NPOI. NPOI did everything we needed, thus saving the day.

Let’s see some code

Okay usual prerequisites are

  • Create a project of whichever type you like
  • Using NuGet add the NPOI package

Easy enough.

Logically enough, we have workbooks at the top level with worksheets within a workbook. Within the worksheet we have rows and finally cells within the rows, all pretty obvious.

Let’s take a look at some very basic code

var workbook = new XSSFWorkbook();
var worksheet = workbook.CreateSheet("Sheet1");

var row = worksheet.CreateRow(0);
var cell = row.CreateCell(0);

cell.SetCellValue("Hello Excel");

using (var stream = new FileStream("test.xlsx", FileMode.Create, FileAccess.Write))
{
   workbook.Write(stream);
}

Process.Start("test.xlsx");

The above should be pretty self explanatory; after creating the workbook etc. we write the workbook to a file and then, using Process, we get Excel to display the file we’ve created.

Autosizing columns

By default you might feel the columns are too narrow; we can therefore iterate over the columns after setting our data and run

for (var c = 0; c < worksheet.GetRow(0).Cells.Count; c++)
{
   worksheet.AutoSizeColumn(c);
}

The above code simply loops over the columns (I’ve assumed row 0 holds headings for each column) and tells the worksheet to auto-size them.

Grouping rows

One thing we have in our data is a need to show parent child relationships in the Excel spreadsheet. Excel allows us to do this by “grouping” rows. For example, if we have


We’d like to show this in Excel in collapsible rows, like a treeview. As such we want the child curves to be within the group so we’d see something like this


or expanded


to achieve this in NPOI (assuming Parent is row 0) we would group rows 1 and 2, i.e.

worksheet.GroupRow(1, 2);
//if we want to default the rows to collapsed use
worksheet.SetRowGroupCollapsed(1, true);

Finally, for grouping: the +/- button by default displays at the bottom of the grouping, which I always found a little strange, so to have this display at the top of the group we set

worksheet.RowSumsBelow = false;

Date format

You may wish to customise the way DateTimes are displayed, in which case we need to apply a style to the cell object. For example, let’s display the DateTime in the format dd mmm yy hh:mm

var creationHelper = workbook.GetCreationHelper();

var cellStyle = workbook.CreateCellStyle();
cellStyle.DataFormat = creationHelper
   .CreateDataFormat()
   .GetFormat("dd mmm yy hh:mm");
cellStyle.Alignment = HorizontalAlignment.Left;

// to apply to our cell we use
cell.CellStyle = cellStyle;


Adding a WebApi controller to an existing ASP.NET MVC application

So I’ve got an existing ASP.NET MVC5 application and need to add a REST api using WebApi.

  • Add a new Controller
  • Select Web API 2 Controller – Empty (or whatever your preference is)
  • Add your methods as normal
  • Open Global.asax.cs and near the start, for example after AreaRegistration but before the route configuration, add

GlobalConfiguration.Configure(WebApiConfig.Register);

easy enough. The key is to not put the GlobalConfiguration line as the last line in the Global.asax.cs, as I did initially.

If we assume your controller was named AlbumsController, it might look something like this

public class AlbumsController : ApiController
{
   // api/albums
   public IEnumerable<Album> GetAllAlbums()
   {
      // assuming albums is populated
      // with a list of Album objects
      return albums;
   }
}

as per the comment, access to the API will be through url/api/albums; see WebApiConfig in App_Start for the configuration of this URL.
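For reference, the WebApiConfig.Register supplied by the template looks something like the following (it may differ slightly between versions):

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // enable [Route] style attribute routing
        config.MapHttpAttributeRoutes();

        // the default convention-based route, giving us api/albums
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}
```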

Passing arguments to an ASP.NET MVC5 controller

In our controller we might have a method along the lines

public string Search(string criteria, bool ignoreCase = true)
{
   // do something useful
   return $"Criteria: {criteria}, Ignore Case: {ignoreCase}";
}

Note: I’ve not bothered using HttpUtility.HtmlEncode on the return string as I want to minimize the code for these snippets.

So we can simply pass the arguments in a query string (assuming our controller is named MusicController), for example

/Music/Search?criteria=abc&ignoreCase=false
or we can add/change the routing in RouteConfig, so for example in RouteConfig.RegisterRoutes we add

routes.MapRoute(
   name: "Music",
   url: "{controller}/{action}/{criteria}/{ignoreCase}"
);

now we can compose a URL thus

/Music/Search/abc/false
Note: the routing names /{criteria}/{ignoreCase} must have the same names as the method parameters.

Obviously this example is a little contrived as we probably wouldn’t want to create a route for such a specific method signature.

We might simply incorporate partial parameters into the route; for example, maybe all our MusicController methods took a criteria argument, then we might use

routes.MapRoute(
   name: "Music",
   url: "{controller}/{action}/{criteria}"
);

Note: there cannot be another route with the same number of parameters in the url preceding this or it will not be used.

and hence our URL would look like

/Music/Search/abc

This should be a nice short post.

As I use IoC a lot in my desktop applications I also want similar capabilities in an ASP.NET MVC application. I’ll use Unity as the container initially.

  • Create a new project using the Templates | Web | ASP.NET Web Application option in the New Project dialog in Visual Studio, press OK
  • Next Select the MVC Template and change authentication (if need be) and check whether to host in the cloud or not, then press OK
  • Select the References section in your solution explorer, right mouse click and select Manage NuGet Packages
  • Locate the Unity.Mvc package and install it

Once installed we need to locate the App_Start/UnityConfig.cs file and within the RegisterTypes method we add our mappings as usual, i.e.

container.RegisterType<IServerStatus, ServerStatus>();
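With the mapping in place, Unity can constructor-inject the dependency into our controllers; a minimal sketch (the ServerStatusController and its use of IServerStatus here are illustrative):

```csharp
public class ServerStatusController : Controller
{
    private readonly IServerStatus status;

    // Unity resolves IServerStatus via the mapping in UnityConfig
    public ServerStatusController(IServerStatus status)
    {
        this.status = status;
    }

    public ActionResult Index()
    {
        return View(status);
    }
}
```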

There are also other IoC container NuGet packages, including Ninject (Ninject.MVCx); with these we simply install the package relevant to our version of MVC, for example Ninject.MVC4, and we are supplied with the App_Start/NinjectWebCommon.cs file where we can use the RegisterServices method to register our mappings, i.e.

kernel.Bind<IServerStatus>().To<ServerStatus>();

See Extending NerdDinner: Adding MEF and plugins to ASP.NET MVC for information on using MEF with ASP.NET.

SignalR 2

Well it’s about time I revisited SignalR. I’m working on a little side project to act as an intranet dashboard for one of the applications I support.

The idea is to produce those commonly asked for pieces of information about the current state of the application, infrastructure etc. into a usable web page.

So first up I want a way to update the dashboard when changes are detected (or found when polling at specific periods). As I have written something a little like this in the past with SignalR, I thought it’d be good to see where the technology stood now.

So we’re going to create a bare minimum ASP.NET MVC application with SignalR periodically causing updates to the page – of course there’s plenty of chat samples for this, so if you’re looking for something a little more in depth, please go and check those out.

Getting started

I’m using Visual Studio 2015 and will be creating an ASP.NET MVC5 web application for this (of course you don’t need to go full ASP.NET MVC for this, but I want to integrate this into my MVC app).

  • Create a new project in VS2015, select Visual C#/Web/ASP.NET Application (available as a .NET 4.5 or above template) – my project is called SignalRTest
  • After you press OK, select MVC from the next screen and press OK
  • Add a new folder named Hubs (not required but keeps the code partitioned nicely)
  • Right mouse click on the Hubs folder and select Add | New Item
  • Select Visual C#/Web/SignalR
  • Select SignalR Hub Class (v2) and give it a meaningful name, mine is ServerStatusHub
  • If you have a Startup.cs file (this will be supplied if you have authorization enabled), then simply add the following line to the Configuration method

    app.MapSignalR();
  • If you chose no authentication then you’ll need to create a Startup.cs file in the root folder (alongside the Web.config in your solution), it should look like this

    using Microsoft.Owin;
    using Owin;

    [assembly: OwinStartupAttribute(typeof(SignalRTest.Startup))]
    namespace SignalRTest
    {
        public partial class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                app.MapSignalR();
            }
        }
    }

Let’s pause and take stock, we now have a Hub derived object named ServerStatusHub which will be callable from JavaScript (in the application we’re writing) and will equally be able to call out to client JavaScript code as and when server updates come in.

We’re going to change the code in the hub to this

using System;
using System.Threading;
using Microsoft.AspNet.SignalR;

namespace SignalRTest.Hubs
{
    public class ServerStatusHub : Hub
    {
        private readonly Timer timer;

        public ServerStatusHub()
        {
            // fire a refresh every five seconds (the interval is arbitrary)
            timer = new Timer(state => Refresh(), null,
                TimeSpan.Zero, TimeSpan.FromSeconds(5));
        }

        public void Refresh()
        {
            // push the latest status out to all connected clients
            Clients.All.refresh(DateTime.Now.ToString());
        }
    }
}
Note: this is not great code as the timer will just keep going until the application closes.

So this will both simulate the equivalent of events coming from some external trigger (in this case the timer) which will be received and processed on the web application (client) and it also allows the code to be called as a server to initiate a refresh of the status, i.e. via a button click (for example).

Open the Index.cshtml file in the Views/Home folder of the solution and remove all the divs leaving the following

@{
    ViewBag.Title = "Home Page";
}

Now add to the Index.cshtml, the following

<div id="serverStatus"></div>
<button id="serverRefresh">Refresh</button>

@section scripts {
    <script src="~/Scripts/jquery.signalR-2.1.2.min.js"></script>
    <script src="~/signalr/hubs"></script>
    <script>
        $(function() {

            try {
                var serverStatus = $.connection.serverStatusHub;

                serverStatus.client.refresh = function(status) {
                    $('#serverStatus').html(htmlEncode(status));
                };

                $('#serverRefresh')
                    .click(function() {
                        serverStatus.server.refresh();
                    });

            } catch (sourceError) {
                $('#serverStatus').html(htmlEncode(sourceError));
            }

            $.connection.hub.start()
                .done(function() {
                    // connected to the hub
                })
                .fail(function() {
                    $('#serverStatus').html('Failed to start server hub');
                });
        });

        function htmlEncode(value) {
            return $('<div />').text(value).html();
        }
    </script>
}
Don’t worry about the blue squiggly line saying that ~/signalr/hubs could not be found; the proxies will be created when the application runs. If you do want to see what’s created you can run the application and navigate to the folder (i.e. http://localhost:56433/signalr/hubs) and see the proxies.

So we’re creating the SignalR JavaScript code against the proxies and hence have an object named serverStatusHub.

Whilst the methods and types are Pascal case in the C# code, we use camel case in the JavaScript.

The code above simply creates a connection to the server status hub and then we create a client method (equivalent to a callback) where we’ll receive updates from the hub as they come in. We’ll simply output these to the HTML page.

We also hook up to the button serverRefresh so the user can call the hub to get the latest status of the servers in our imaginary application. The rest of this section of code is error handling, followed (after the catch block) by the code to connect to the hub and start the SignalR hub up.

And that’s all there is to it.

Returning to Entity Framework database first

After working with a database project in Visual Studio, I thought it was probably time to create a simple console application to interact with the database using the current version of Entity Framework (v6.0).

So as we’ve already created the cddb database in a previous post, we’ll simply create a new console project and work with that DB.

  • Create your application, as stated mine is a console application
  • Add new item and select Data | ADO.NET Entity Data Model, mine’s named CddbContext (as this will include the source for the data context created for EF)
  • Select Code First from database
  • Create a new connection and supply the relevant details for your database connection
  • Press next then select the tables (and views) you want to generate code for – then click Finish

Here’s the code generated


public partial class CddbContext : DbContext
{
   public CddbContext()
      : base("name=CddbContext")
   {
   }

   public virtual DbSet<album> albums { get; set; }
   public virtual DbSet<artist> artists { get; set; }

   protected override void OnModelCreating(DbModelBuilder modelBuilder)
   {
      modelBuilder.Entity<artist>()
         .HasMany(e => e.albums)
         .WithRequired(e => e.artist)
         .HasForeignKey(e => e.ArtistId);
   }
}

public partial class artist
{
   public artist()
   {
      albums = new HashSet<album>();
   }

   public int Id { get; set; }

   public string Name { get; set; }

   public virtual ICollection<album> albums { get; set; }
}


public partial class album
{
   public int Id { get; set; }

   public string Title { get; set; }

   public int ArtistId { get; set; }

   public virtual artist artist { get; set; }
}

finally let’s create a simple bit of code to get the artists from the database, so in Main we have

using (var db = new CddbContext())
{
   var artists = db.artists;
   foreach (var a in artists)
   {
      Console.WriteLine(a.Name);
   }
}
If your database schema changes you will need to re-run the steps to generate your data context etc. or code by hand. There isn’t (currently) a way to update existing classes – so don’t make changes to the generated code and expect it to still exist after regeneration.
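Since the generated classes are standard EF entities, we can of course query them with LINQ as well; as a sketch (the specific query here is mine, assuming data such as that from the earlier posts), eager loading an artist’s albums might look like this:

```csharp
using System.Data.Entity; // for the Include extension method
using System.Linq;

using (var db = new CddbContext())
{
    // load the artist and their albums in a single query
    var artist = db.artists
        .Include(a => a.albums)
        .FirstOrDefault(a => a.Name == "Van Halen");
}
```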

SQL Server Database Project

I must admit most (if not all) my SQL Server interaction takes place in SQL Server Management Studio, but I wanted to create a new database project using the Visual Studio database tools, so I thought I’d give this a go…

Getting Started

I always like to start such posts off with a set of steps for getting the basics up and running, so let’s continue with that way of doing things.

  • Create a new SQL Server | SQL Server Database Project (mine’s called cddb)
  • Select the project in solution explorer and if need be, open the project properties/settings and set the Target platform etc.
  • Right mouse click on the project and select Add | Table…
  • Name the table artist
  • Repeat the last two steps but name the table album

So at this point we have a database project and two tables/sql scripts with nothing much in them.

We’re going to create some very basic tables, as this post isn’t meant to be too focused on the data but more on using these tools.

So for artist.sql we should have

CREATE TABLE [dbo].[artist]
(
   [Id] INT NOT NULL PRIMARY KEY,
   [Name] NVARCHAR(50) NOT NULL
)

and for album.sql we should have

CREATE TABLE [dbo].[album]
(
   [Id] INT NOT NULL PRIMARY KEY,
   [Title] NVARCHAR(100) NOT NULL,
   [ArtistId] INT NOT NULL,
   CONSTRAINT [FK_album_Toartist]
      FOREIGN KEY ([ArtistId])
      REFERENCES [artist]([Id])
)

Deploy/Publish your database

At this point, let’s actually publish our database to an instance of SQL Server or SQL Server Express.

Right mouse click on the project and select Publish; you should have the Database name supplied as cddb and the script as cddb.sql. Click the Edit button and enter the connection details for the instance of SQL Server. Finally click on the Generate Script button if you wish to create a DB script and then run this yourself, or click the Publish button to automatically publish your tables to the SQL Server instance.

In the Data Tools Operations view you’ll see the process of publishing and creating the database scripts. Once successfully completed you should now have the cddb database running in SQL Server.

Let’s add some data

In a continuous integration and/or continuous deployment scenario, it’s useful to recreate our database from scripts, so generating the script instead of publishing to the database obviously helps in this, but it’s also useful to generate some data. Of course it could be we’re populating the data from another instance of the DB, but for this example we’re going to add some data via an SQL script.

Right mouse click on the database project and select Add | Script… We’re going to create a post-deployment script. As the name suggests this should be run after the DB is generated. I’ve named my script populate.sql, you’ll notice in the Visual Studio properties window the Advanced | Build Action will show PostDeploy.

We’re going to use the T-SQL Merge statement to create our test data, this script is as follows


MERGE artist AS target
USING (VALUES
   (1, N'Alice Cooper'),
   (2, N'Van Halen'),
   (3, N'Deep Purple')
) AS source (Id, Name)
ON target.Id = source.Id
WHEN MATCHED THEN
   UPDATE SET Name = source.Name
WHEN NOT MATCHED THEN
   INSERT (Id, Name)
   VALUES (Id, Name);



MERGE album AS target
USING (VALUES
   (1, N'Lace and Whiskey', 1),
   (2, N'I', 1),
   (3, N'III', 1),
   (4, N'Burn', 2)
) AS source (Id, Title, ArtistId)
ON target.Id = source.Id
WHEN MATCHED THEN
   UPDATE SET Title = source.Title, ArtistId = source.ArtistId
WHEN NOT MATCHED THEN
   INSERT (Id, Title, ArtistId)
   VALUES (Id, Title, ArtistId);


Of course the above would be somewhat unwieldy if we were populating hundreds of thousands of entries or MBs of data.

Populating data from CSV

One possible solution for populating a larger number of records might be to use one or more CSV files to contain our seed data. So let’s assume we have the following files, artists.csv

1,Alice Cooper
2,Van Halen
3,Deep Purple

and albums.csv

1,Lace and Whiskey,1

we could now replace our post deployment code with the following

BULK INSERT artist
   FROM 'artists.csv'
   WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
GO

BULK INSERT album
   FROM 'albums.csv'
   WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
GO

Importing data using SQL Server Management Studio

Whilst this doesn’t fit in with the context of this post (i.e. it’s not automated), you could of course create the database and use SQL Server Management Studio’s Import task to import data into your database.

Simply select the database you want to import data into, right mouse click on this and select Tasks | Import Data and work through the wizard to import your data from a variety of sources.

Automating Excel (some basics)

Here are some basics for automating Excel from C#.

Make sure you dereference your Excel COM objects

Actually I’m going to start with a word of caution. When interacting with Excel you need to ensure that you dereference any Excel objects after use or you’ll find Excel remains in memory when you probably thought it had been closed.

To correctly deal with Excel’s COM objects the best thing to do is store each object in a variable and, when you’ve finished with it, make sure you set that variable to null. Accessing Excel objects using simple dot notation such as

var workSheet = application.Workbooks[1].Sheets[1];

will result in intermediate COM objects being created, but without your application having a reference to them they’ll remain referenced long after you expect.

Instead do things like

var workbook = application.Workbooks[1];
var workSheet = workbook.Sheets[1];

If in doubt, check via Task Manager to see if your instance of Excel has been closed.
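If we want to be more deterministic than waiting for the garbage collector, we can also explicitly release each COM object with Marshal.ReleaseComObject. A sketch (assuming nothing else still references these objects):

```csharp
using System.Runtime.InteropServices;

// keep a reference to every COM object we touch...
var workbooks = application.Workbooks;
var workbook = workbooks[1];

// ...then release them in reverse order once we're done
workbook.Close(false);
Marshal.ReleaseComObject(workbook);
Marshal.ReleaseComObject(workbooks);

application.Quit();
Marshal.ReleaseComObject(application);
```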

Starting Excel

var application = new Excel.Application();
var workbook = application.Workbooks.Add(Excel.XlWBATemplate.xlWBATWorksheet);
Excel.Worksheet worksheet = workbook.Sheets[1];

application.Visible = true;

Setting Cell data

worksheet.Cells[row, column++] =
    cell.Value != null ?
       cell.Value.ToString() :
       string.Empty;

Grouping a range

Excel.Range range = worksheet.Rows[String.Format("{0}:{1}", row, row + children)];
range.OutlineLevel = indent;
range.Group(Missing.Value, Missing.Value, Missing.Value, Missing.Value);

Change the background colour

worksheet.Rows[row].Interior.Color = Excel.XlRgbColor.rgbRed;

Change the background colour from a Color object

We can use the built-in colour conversion code, which from WPF would mean converting to a System.Drawing.Color, as per this

System.Drawing.Color clr = System.Drawing.Color.FromArgb(solid.Color.A, solid.Color.R, solid.Color.G, solid.Color.B);

Now we can use this as follows

worksheet.Rows[row].Interior.Color = ColorTranslator.ToOle(clr);

or we can do this ourselves using

int clr = solid.Color.R | solid.Color.G << 8 | solid.Color.B << 16;
worksheet.Rows[row].Interior.Color = clr;

Changing the foreground colour

int clr = solid.Color.R | solid.Color.G << 8 | solid.Color.B << 16;
worksheet.Rows[row].Font.Color = clr;


Using Rx to read from UI and write on a worker thread

I have a problem whereby I need to iterate over a potentially large number of rows in a UI grid control; the iteration needs to take place on the UI thread, but the writing (which in this instance writes the data to Excel) can occur on a background thread, which should make the UI a little more responsive.

Now this might not be the best solution but it seems to work better than other more synchronous solutions. One thing to note is that this current design expects the call to the method to be on the UI thread and hence doesn’t marshal the call to the grid control onto the UI thread (it assumes it’s on it).

Within the UI iteration method I create a new observable using

var rowObservable = Observable.Create<string>(observer =>
{
   // iterate over grid records calling the OnNext method
   // for example
   foreach (var cell in Cells)
   {
      observer.OnNext(cell.Value);
   }
   observer.OnCompleted();
   return Disposable.Empty;
});

In the above code we loop through the cells of our UI grid control and then place each value onto the observer using OnNext. When the process completes we then call the OnCompleted method of the observer to tell any subscribers that the process is finished.

Let’s look at the subscriber code

var tsk = new TaskCompletionSource<object>();

rowObservable
   .Buffer(20)
   .ObserveOn(Scheduler.Default)
   .SubscribeOn(SynchronizationContext.Current)
   .Subscribe(r =>
   {
      foreach (var item in r)
      {
         // write each value to Excel (in this example)
      }
   },
   () => tsk.SetResult(null));

return tsk.Task;

In the above code we buffer pushed items from rowObservable so we only process every 20 items. We ObserveOn the default scheduler, so this will be a background thread (i.e. the threadpool), but we SubscribeOn the current synchronization context – remember I mentioned this method should be called from the UI thread and hence this subscribes on the UI thread. When the rowObservable completes we’ll still write out the last few items (if there are fewer than the buffer size).

You’ll notice we use a puppet task, controlling a TaskCompletionSource, and on completion of the rowObservable we set the result on the puppet task, thus allowing our method to be used in async/await scenarios.
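For example, assuming the method wrapping the code above is called ExportRowsAsync (a name I’ve made up for illustration), the puppet task lets a caller simply await the whole export:

```csharp
// hypothetical async UI event handler awaiting the export;
// being on the UI thread satisfies the SubscribeOn assumption
private async void ExportButton_Click(object sender, RoutedEventArgs e)
{
    await ExportRowsAsync();
    MessageBox.Show("Export complete");
}
```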

Like I mentioned, this actually might not be the best solution to the original problem, but it was interesting getting it up and running.