SignalR 2

Well it’s about time I revisited SignalR. I’m working on a little side project to act as an intranet dashboard for one of the applications I support.

The idea is to surface those commonly asked-for pieces of information about the current state of the application, infrastructure etc. on a usable web page.

So first up I want a way to update the dashboard when changes are detected (or found when polling at specific intervals). As I have written something a little like this in the past with SignalR, I thought it’d be good to see where the technology stands now.

So we’re going to create a bare minimum ASP.NET MVC application with SignalR periodically causing updates to the page – of course there are plenty of chat samples for this, so if you’re looking for something a little more in depth, please go and check those out.

Getting started

I’m using Visual Studio 2015 and will be creating an ASP.NET MVC5 web application for this (of course you don’t need to go full ASP.NET MVC for this, but I want to integrate this into my MVC app).

  • Create a new project in VS2015, select Visual C#/Web/ASP.NET Application (available as a .NET 4.5 or above template) – my project is called SignalRTest
  • After you press OK, select MVC from the next screen and press OK
  • Add a new folder named Hubs (not required but keeps the code partitioned nicely)
  • Right mouse click on the Hubs folder and select Add | New Item
  • Select Visual C#/Web/SignalR
  • Select SignalR Hub Class (v2) and give it a meaningful name, mine is ServerStatusHub
  • If you have a Startup.cs file (this will be supplied if you have authentication enabled), then simply add the following line to the Configuration method
    app.MapSignalR();
    
  • If you chose no authentication then you’ll need to create a Startup.cs file in the root folder (alongside the Web.config in your solution); it should look like this
    using Microsoft.Owin;
    using Owin;
    
    [assembly: OwinStartupAttribute(typeof(SignalRTest.Startup))]
    namespace SignalRTest
    {
        public partial class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                app.MapSignalR();
            }
        }
    }
    

Let’s pause and take stock: we now have a Hub-derived class named ServerStatusHub which will be callable from JavaScript (in the application we’re writing) and will equally be able to call out to client-side JavaScript code as and when server updates come in.

We’re going to change the code in the hub to this

using System;
using System.Threading;
using Microsoft.AspNet.SignalR;

namespace SignalRTest.Hubs
{
    public class ServerStatusHub : Hub
    {
        private readonly Timer timer;

        public ServerStatusHub()
        {
            timer = new Timer(state => Refresh());
            timer.Change(TimeSpan.FromSeconds(10), 
               TimeSpan.FromSeconds(30));
        }

        public void Refresh()
        {
            Clients.All.refresh(DateTime.Now.ToString());
        }
   }
}

Note: this is not great code – SignalR creates hub instances as needed, so each new instance starts its own timer, and the timers will just keep going until the application closes – but it’s enough to simulate server-side events for this sample.

So this will both simulate the equivalent of events coming from some external trigger (in this case the timer) which will be received and processed on the web application (client) and it also allows the code to be called as a server to initiate a refresh of the status, i.e. via a button click (for example).
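If your external trigger lives outside of the hub (for example in a background poller), a cleaner approach than a timer inside the hub is to push to clients via a hub context. This is a minimal sketch using SignalR 2’s GlobalHost; the ServerStatusNotifier class name is just for illustration:

```csharp
// Sketch: pushing a refresh to all connected clients from code
// outside of the hub (e.g. a background poller).
using Microsoft.AspNet.SignalR;

public static class ServerStatusNotifier
{
    public static void NotifyRefresh(string status)
    {
        // Resolve the hub context for ServerStatusHub and invoke the
        // client-side refresh callback on every connected client
        var context = GlobalHost.ConnectionManager.GetHubContext<ServerStatusHub>();
        context.Clients.All.refresh(status);
    }
}
```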

Open the Index.cshtml file in the Views/Home folder of the solution and remove all the divs leaving the following

@{
    ViewBag.Title = "Home Page";
}

Now add to the Index.cshtml, the following

<div>
    <div id="serverStatus"></div>
</div>
<button id="serverRefresh">Refresh</button>

@section scripts {
    <script src="~/Scripts/jquery.signalR-2.1.2.min.js"></script>
    <script src="~/signalr/hubs"></script>
    <script>
        $(function() {

            try {
                var serverStatus = $.connection.serverStatusHub;

                serverStatus.client.refresh = function(status) {
                    $('#serverStatus').html(htmlEncode(status));
                };

                $('#serverRefresh')
                    .click(function() {
                        serverStatus.server.refresh();
                    });

            } catch (sourceError) {
                $('#serverStatus').html(htmlEncode(sourceError.message));
            }

            $.connection.hub.start()
                .done(function() {
                })
                .fail(function() {
                    $('#serverStatus').html('Failed to start server hub');
                });
        });

        function htmlEncode(value) {
            return $('<div />').text(value).html();
        }
    </script>
}

Don’t worry about the blue squiggly line saying that ~/signalr/hubs could not be found; the proxies will be created when the application runs. If you do want to see what’s created, you can run the application and navigate to the URL (i.e. http://localhost:56433/signalr/hubs) to see the proxies.

So we’re writing our SignalR JavaScript code against the generated proxies and hence have an object named serverStatusHub.

Whilst the methods and types are Pascal case in the C# code, we use camel case in the JavaScript.

The code above simply creates a connection to the server status hub, then we create a client method (equivalent to a callback) where we’ll receive updates from the hub as they come in. We’ll simply output these to the HTML page.

We also hook up the serverRefresh button so the user can call the hub to get the latest status of the servers in our imaginary application. The rest of this section is error handling, followed (after the catch block) by the code to connect to the hub and start it up.

And that’s all there is to it.

Returning to Entity Framework database first

After working with a database project in Visual Studio, I thought it was probably time to create a simple console application to interact with the database using the current version of Entity Framework (v6.0).

So as we’ve already created the cddb database in a previous post, we’ll simply create a new console project and work with that DB.

  • Create your application, as stated mine is a console application
  • Add new item and select Data | ADO.NET Entity Data Model, mine’s named CddbContext (as this will include the source for the data context created for EF)
  • Select Code First from database
  • Create a new connection and supply the relevant details for your database connection
  • Press next then select the tables (and views) you want to generate code for – then click Finish

Here’s the code generated

CddbContext.cs

public partial class CddbContext : DbContext
{
   public CddbContext()
      : base("name=CddbContext")
   {
   }

   public virtual DbSet<album> albums { get; set; }
   public virtual DbSet<artist> artists { get; set; }

   protected override void OnModelCreating(DbModelBuilder modelBuilder)
   {
      modelBuilder.Entity<artist>()
         .HasMany(e => e.albums)
         .WithRequired(e => e.artist)
         .WillCascadeOnDelete(false);
   }
}

artist.cs

[Table("artist")]
public partial class artist
{
   [System.Diagnostics.CodeAnalysis.SuppressMessage(
    "Microsoft.Usage", 
    "CA2214:DoNotCallOverridableMethodsInConstructors")]
   public artist()
   {
      albums = new HashSet<album>();
   }

   public int Id { get; set; }

   [Required]
   [StringLength(50)]
   public string Name { get; set; }

   [System.Diagnostics.CodeAnalysis.SuppressMessage(
    "Microsoft.Usage", 
    "CA2227:CollectionPropertiesShouldBeReadOnly")]
   public virtual ICollection<album> albums { get; set; }
}

album.cs

[Table("album")]
public partial class album
{
   public int Id { get; set; }

   [Required]
   [StringLength(50)]
   public string Title { get; set; }

   public int ArtistId { get; set; }

   public virtual artist artist { get; set; }
}

Finally let’s create a simple bit of code to get the artists from the database, so in Main we have

using (var db = new CddbContext())
{
   var artists = db.artists;
   foreach (var a in artists)
   {
      Console.WriteLine(a.Name);
   }
}
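If we want each artist’s albums loaded in the same query, we can eagerly load the navigation property. A minimal sketch using EF6’s Include (the lambda overload lives in the System.Data.Entity namespace):

```csharp
// Sketch: eagerly load each artist's albums in a single query
using System;
using System.Data.Entity;
using System.Linq;

using (var db = new CddbContext())
{
   var artists = db.artists
      .Include(a => a.albums)   // eager load the albums collection
      .OrderBy(a => a.Name);

   foreach (var a in artists)
   {
      Console.WriteLine("{0} ({1} albums)", a.Name, a.albums.Count);
   }
}
```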

If your database schema changes you will need to re-run the steps to regenerate your data context etc., or update the code by hand. There isn’t (currently) a way to update the existing classes – so don’t make changes to the generated code and expect them to survive regeneration.

SQL Server Database Project

I must admit most (if not all) my SQL Server interaction takes place in SQL Server Management Studio, but I wanted to create a new database project using the Visual Studio database tools, so I thought I’d give this a go…

Getting Started

I always like to start such posts off with a set of steps for getting the basics up and running, so let’s continue with that way of doing things.

  • Create a new SQL Server | SQL Server Database Project (mine’s called cddb)
  • Select the project in solution explorer and if need be, open the project properties/settings and set the Target platform etc.
  • Right mouse click on the project and select Add | Table…
  • Name the table artist
  • Repeat the last two steps but name the table album

So at this point we have a database project and two tables/sql scripts with nothing much in them.

We’re going to create some very basic tables, as this post isn’t meant to be too focused on data but more on using these tools.

So for artist.sql we should have

CREATE TABLE [dbo].[artist]
(
[Id] INT NOT NULL PRIMARY KEY IDENTITY, 
[Name] NVARCHAR(50) NOT NULL
)

and for album.sql we should have

CREATE TABLE [dbo].[album]
(
[Id] INT NOT NULL PRIMARY KEY IDENTITY, 
[Title] NVARCHAR(50) NOT NULL, 
[ArtistId] INT NOT NULL, 
CONSTRAINT [FK_album_Toartist] 
  FOREIGN KEY (ArtistId) 
  REFERENCES [artist]([Id])
)

Deploy/Publish your database

At this point, let’s actually publish our database to an instance of SQL Server or SQL Server Express.

Right mouse click on the project and select Publish; you should have the Database name supplied as cddb and the script as cddb.sql. Click the Edit button and enter the connection details for the instance of SQL Server. Finally, click the Generate Script button if you wish to create the DB script and run it yourself, or click the Publish button to publish your tables directly to the SQL Server instance.

In the Data Tools Operations view you’ll see the process of publishing and creating the database scripts. Once successfully completed you should now have the cddb database running in SQL Server.

Let’s add some data

In a continuous integration and/or continuous deployment scenario, it’s useful to be able to recreate our database from scripts, so generating the script instead of publishing to the database obviously helps with this, but it’s also useful to generate some data. Of course it could be we’re populating the data from another instance of the DB, but for this example we’re going to add some data via an SQL script.

Right mouse click on the database project and select Add | Script… We’re going to create a post-deployment script. As the name suggests this should be run after the DB is generated. I’ve named my script populate.sql, you’ll notice in the Visual Studio properties window the Advanced | Build Action will show PostDeploy.

We’re going to use the T-SQL Merge statement to create our test data, this script is as follows

SET IDENTITY_INSERT artist ON
GO

MERGE artist AS target
USING (VALUES
   (1, N'Alice Cooper'),
   (2, N'Van Halen'),
   (3, N'Deep Purple')
)
AS source (Id, Name)
ON target.Id = source.Id
WHEN MATCHED THEN
   UPDATE SET Name = source.Name
WHEN NOT MATCHED BY TARGET THEN
   INSERT (Id, Name)
   VALUES (source.Id, source.Name)
WHEN NOT MATCHED BY SOURCE THEN
   DELETE;
GO

SET IDENTITY_INSERT artist OFF
GO

SET IDENTITY_INSERT album ON
GO

MERGE album AS target
USING (VALUES
   (1, N'Lace and Whiskey', 1),
   (2, N'I', 1),
   (3, N'III', 1),
   (4, N'Burn', 2)
)
AS source (Id, Title, ArtistId)
ON target.Id = source.Id
WHEN MATCHED THEN
   UPDATE SET Title = source.Title, ArtistId = source.ArtistId
WHEN NOT MATCHED BY TARGET THEN
   INSERT (Id, Title, ArtistId)
   VALUES (source.Id, source.Title, source.ArtistId)
WHEN NOT MATCHED BY SOURCE THEN
   DELETE;
GO

SET IDENTITY_INSERT album OFF
GO

Of course the above would become somewhat unwieldy if we were populating hundreds of records or MBs worth of data.

Populating data from CSV

One possible solution for populating a larger number of records might be to use one or more CSV files to contain our seed data. So let’s assume we have the following files

artists.csv

1,Alice Cooper
2,Van Halen
3,Deep Purple

and albums.csv

1,Lace and Whiskey,1
2,I,1
3,III,1
4,Burn,2

we could now replace our post deployment code with the following

BULK INSERT artist
   FROM 'artists.csv'
   WITH
   (
   FIRSTROW=1,
   FIELDTERMINATOR=',',
   ROWTERMINATOR='\n',
   TABLOCK
   )

GO

BULK INSERT album
   FROM 'albums.csv'
   WITH
   (
   FIRSTROW=1,
   FIELDTERMINATOR=',',
   ROWTERMINATOR='\n',
   TABLOCK
   )

GO

Importing data using SQL Server Management Studio

Whilst this doesn’t fit with the context of this post (i.e. it’s not automated), you could of course create the database and use SQL Server Management Studio’s Import task to import data into your database.

Simply select the database you want to import data into, right mouse click on this and select Tasks | Import Data and work through the wizard to import your data from a variety of sources.

Automating Excel (some basics)

Here are some basics for automating Excel from C#.

Make sure you dereference your Excel COM objects

Actually I’m going to start with a word of caution. When interacting with Excel you need to ensure that you dereference any Excel objects after use or you’ll find Excel remains in memory when you probably thought it had been closed.

To correctly deal with Excel’s COM objects, the best thing to do is store each object in a variable and, when you’ve finished with it, make sure you set that variable to null. Accessing Excel objects using simple dot notation such as

application.Workbooks[1].Sheets[1];

will result in intermediate COM objects being created and, without your application holding references to them, they’ll remain alive long after you expect.

Instead do things like

var workbook = application.Workbooks[1];
var worksheet = workbook.Sheets[1];

If in doubt, check via Task Manager to see if your instance of Excel has been closed.
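If setting variables to null and waiting for the garbage collector isn’t deterministic enough, the COM references can be released explicitly. A sketch, assuming application/workbook/worksheet variables like those used elsewhere in this post:

```csharp
// Sketch: explicitly release Excel COM references when finished
// (Marshal lives in System.Runtime.InteropServices)
using System.Runtime.InteropServices;

workbook.Close(false);   // close without saving changes
application.Quit();

Marshal.ReleaseComObject(worksheet);
Marshal.ReleaseComObject(workbook);
Marshal.ReleaseComObject(application);
```

Release the objects in reverse order of acquisition (worksheet before workbook before application) so nothing still holds a reference to an already-released parent.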

Starting Excel

var application = new Excel.Application();
var workbook = application.Workbooks.Add(Excel.XlWBATemplate.xlWBATWorksheet);
Excel.Worksheet worksheet = workbook.Sheets[1];

application.Visible = true;

Setting Cell data

worksheet.Cells[row, column++] = 
    cell.Value != null ? 
       cell.Value.ToString() : 
       String.Empty;
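As an aside, setting cells one at a time is slow over COM; assigning a two-dimensional array to a Range writes a whole block in a single COM call. A sketch, assuming a worksheet variable as above (the sample values are purely illustrative):

```csharp
// Sketch: write a 2x2 block of values in one COM call by assigning
// a 2D array to a range (much faster than per-cell assignment)
var values = new object[2, 2]
{
   { "Name", "Value" },
   { "Total", 42 }
};

// Range spanning 2 rows x 2 columns starting at A1
Excel.Range range = worksheet.Range["A1", "B2"];
range.Value2 = values;
```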

Grouping a range

Excel.Range range = worksheet.Rows[String.Format("{0}:{1}", row, row + children)];
range.OutlineLevel = indent;
range.Group(Missing.Value, Missing.Value, Missing.Value, Missing.Value);

Change the background colour

worksheet.Rows[row].Interior.Color = Excel.XlRgbColor.rgbRed;

Change the background colour from a Color object

We can use the built-in colour conversion code, which from WPF would mean converting to a System.Drawing.Color, as per this

System.Drawing.Color clr = System.Drawing.Color.FromArgb(solid.Color.A, solid.Color.R, solid.Color.G, solid.Color.B);

Now we can use this as follows

worksheet.Rows[row].Interior.Color = ColorTranslator.ToOle(clr);

or we can do this ourselves using

int clr = solid.Color.R | solid.Color.G << 8 | solid.Color.B << 16;
worksheet.Rows[row].Interior.Color = clr;

Changing the foreground colour

int clr = solid.Color.R | solid.Color.G << 8 | solid.Color.B << 16;
worksheet.Rows[row].Font.Color = clr;

References

https://msdn.microsoft.com/en-us/library/microsoft.office.interop.excel.aspx

Using Rx to read from UI and write on a worker thread

I have a problem whereby I need to iterate over a potentially large number of rows in a UI grid control; the iteration needs to take place on the UI thread, but the writing (which in this instance writes the data to Excel) can occur on a background thread, which should make the UI a little more responsive.

Now this might not be the best solution but it seems to work better than other more synchronous solutions. One thing to note is that this current design expects the call to the method to be on the UI thread and hence doesn’t marshal the call to the grid control onto the UI thread (it assumes it’s on it).

Within the UI iteration method I create a new observable using

var rowObservable = Observable.Create<string>(observer =>
{
   // iterate over grid records calling the OnNext method 
   // for example
   foreach(var cell in Cells)
   {
      observer.OnNext(cell.Value);
   }

   observer.OnCompleted();
   return Disposable.Empty;
});

In the above code we loop through the cells of our UI grid control and then place each value onto the observer using OnNext. When the process completes we then call the OnCompleted method of the observer to tell any subscribers that the process is finished.

Let’s look at the subscriber code

var tsk = new TaskCompletionSource<object>();

rowObservable.Buffer(20).
   SubscribeOn(SynchronizationContext.Current).
   ObserveOn(Scheduler.Default).
   Subscribe(r =>
   {
      foreach (var item in r)
      {
          // write each value to Excel (in this example)
      }
   }, () =>
   {
      tsk.SetResult(null);
   });

return tsk.Task;

In the above code we buffer the items pushed from rowObservable so we process them 20 at a time. We ObserveOn the default scheduler, so this will be a background thread (i.e. the threadpool), but we SubscribeOn the current synchronization context – remember I mentioned this method should be called from the UI thread, and hence the SubscribeOn (not ObserveOn) relates to the code we’re observing, which is on the UI thread. When rowObservable completes we’ll still write out the last few items (if there are fewer than the buffer size).

Note: It’s important to remember that SubscribeOn schedules the observable (in this case the UI code) not the code within the Subscribe method. This is scheduled using the ObserveOn method.

You’ll notice we use a puppet task, controlling a TaskCompletionSource, and on completion of rowObservable we set the result on the puppet task, thus allowing our method to be used in async/await scenarios.

Like I mentioned, this actually might not be the best solution to the original problem, but it was interesting getting it up and running.

Structured logging with the Semantic Logging Block

In another post I looked at Serilog and what structured logging capabilities can bring to an application, here I’m going to investigate the Patterns & Practices Semantic Logging Application Block.

So we’re looking at a means of logging more than just a “simple” string representing our state or failure (or whatever) from our application. Most likely we’re wanting to output log entries which can be analysed later in a more programmatic manner, i.e. querying or grouping log data.

Getting started

Let’s just get some simple code up and running to see how things fit together.

  • Create a console application
  • Using NuGet, add the EnterpriseLibrary.SemanticLogging package
  • Add a new class named whatever you want, mine’s called MyEventSource

The MyEventSource class looks like this

[EventSource(Name = "MyEventSource")]
public class MyEventSource : EventSource
{
   public static MyEventSource Log { get; } = new MyEventSource();

   [Event(1, Message = "Application Failure: {0}", 
    Level = EventLevel.Informational,
    Keywords = EventKeywords.None)]
   public void Information(string message)
   {
      WriteEvent(1, message);
   }
}

Next up, let’s implement some simple logging code in our Main method

var eventSource = MyEventSource.Log;
var listener = ConsoleLog.CreateListener(
   new JsonEventTextFormatter(EventTextFormatting.Indented));

listener.EnableEvents(eventSource, 
   EventLevel.LogAlways, 
   EventKeywords.All);

eventSource.Information("Application Started");

// do something worthwhile 

eventSource.Information("Exiting Application");

In the example code we’re logging to the console and using the JsonEventTextFormatter, so the output looks like this

{
  "ProviderId": "8983a2e6-c5d2-5a1f-691f-db243cb1f681",
  "EventId": 1,
  "Keywords": 0,
  "Level": 4,
  "Message": "Application Failure: Application Started",
  "Opcode": 0,
  "Task": 65533,
  "Version": 0,
  "Payload": {
    "message": "Application Started"
  },
  "EventName": "InformationInfo",
  "Timestamp": "2016-07-08T10:19:22.8698814Z",
  "ProcessId": 20136,
  "ThreadId": 19128
},
{
  "ProviderId": "8983a2e6-c5d2-5a1f-691f-db243cb1f681",
  "EventId": 1,
  "Keywords": 0,
  "Level": 4,
  "Message": "Application Failure: Exiting Application",
  "Opcode": 0,
  "Task": 65533,
  "Version": 0,
  "Payload": {
    "message": "Exiting Application"
  },
  "EventName": "InformationInfo",
  "Timestamp": "2016-07-08T10:19:22.9648909Z",
  "ProcessId": 20136,
  "ThreadId": 19128
},

Let’s now add a rolling file listener to our Main method

var rollingFileListener =
   RollingFlatFileLog.CreateListener(
      "logs\\semantic.txt", 1073741824,
      "yyyy.MM.dd",
      RollFileExistsBehavior.Increment, 
      RollInterval.Day,
      new JsonEventTextFormatter(EventTextFormatting.Indented));

rollingFileListener.EnableEvents(
   eventSource, 
   EventLevel.LogAlways, 
   EventKeywords.All);

So we simply attach another listener to our event source and now we are logging to both the console and a file (of course in a non-sample application we would not be creating multiple JsonEventTextFormatters etc., but you get the idea).

That’s basically it – we’re up and running.

Returning values (in sequence) using JustMock Lite

InSequence

Okay, so I have some code which is of the format

do 
{
   while (reader.Read())
   {
      // do something 
   }
} while (reader.ReadNextPage());

the basic premise is: read some data from somewhere until the data is exhausted, then read the next page of data, and so on, until no data is left to read.

I wanted to unit test aspects of this by mocking out the reader, allowing me to isolate the specific functionality within the method. Of course I could have refactored this method to test just the inner parts of the loop, but this is not always desirable, as it still means the looping expectation is not unit tested.

I can easily mock the ReadNextPage to return false to just test one page of data, but the Read method itself needs to return true initially and must also return false at some point, or the unit test will potentially get stuck in an infinite loop. Hence, I need to be able to eventually return false from the Read method.

Using InSequence, we can return different values on the calls to the Read method, for example using

Mock.Arrange(() => reader.ReadNextPage()).Returns(false);
Mock.Arrange(() => reader.Read()).Returns(true).InSequence();
Mock.Arrange(() => reader.Read()).Returns(false).InSequence();

Here the first call to Read obviously returns true, the next call returns false, so the unit test will actually complete and we’ll successfully test the loop and whatever is within it.

Creating a TopShelf application

What is Topshelf?

Topshelf is basically a framework for creating a Windows service (it also supports Mono on Linux – although I’ve not tested this yet).

You might still ask “why do we need this, writing a Windows service isn’t difficult?”

Well, a few years back I wrote a similar library; my reasoning was that I wanted to write an application which could be run as both a console application and/or as a Windows service – well, Topshelf gives you this capability and more. Of course, it’s also far simpler to debug a console application than a Windows service.

Getting started

We can write our service in such a way as to have a dependency upon Topshelf, or with no dependency, in which case we use Topshelf methods to redirect to our code – let’s start by looking at the latter route (i.e. a non-dependency version of the service).

Topshelf has a perfectly good sample for getting started on their site’s Show me the code page.

I’m going to pretty much recreate it here, but do check out their page.

  • Create a console application (mine’s called MyTickerService)
  • Add a reference to Topshelf via Nuget
  • Create a new class called TickerService

TickerService looks like this

public class TickerService
{
   private readonly Timer timer;

   public TickerService()
   {
      timer = new Timer(5000) {AutoReset = true};
      timer.Elapsed += (sender, e) =>
      {
         Console.WriteLine("Tick:" + DateTime.Now.ToLongTimeString());
      };
   }
 
   public void Start()
   {
      Console.WriteLine("Service started");
      timer.Start();
   }

   public void Stop()
   {
      Console.WriteLine("Service stopped");
      timer.Stop();
   }
}

This is simple enough; we’re just going to keep writing out to the console on each elapse of the timer. The Start and Stop methods are there to allow us to intercept the service start and stop, so we can set up and clean up anything on our service. The names can be anything we like at this point, as these will ultimately be called from the Topshelf framework methods when we write that code, which we shall do now…

We now need to edit the Program.cs file Main method. Here’s the code

static void Main(string[] args)
{
   HostFactory.Run(x =>
   {
      x.Service<TickerService>(s =>
      {
         s.ConstructUsing(name => new TickerService());
         s.WhenStarted(svc => svc.Start());
         s.WhenStopped(svc => svc.Stop());
      });

      x.SetServiceName("SampleService");
      x.SetDisplayName("Sample");
      x.SetDescription("Sample");
      x.RunAsLocalSystem();
   });
}

Now the Topshelf fluent interface allows us to run the service and as can be seen, when the service starts or stops we simply call any appropriate methods on the TickerService. We then go on to set the service name and the service’s display name and a description before the call to RunAsLocalSystem. We can also run as a specific user/password using RunAs, require a prompt for the same using RunAsPrompt and more (see Topshelf Configuration for more information).

If you now run this application via Visual Studio or from the command prompt, you’ll see the application output text on each elapsed tick. So we can now use our “service” as a standalone console application and debug it easily like this. Then if we want to use the application in a Windows service scenario we can simply run a command prompt as administrator and then use any/all of the following to install the service (once installed it will be visible in services.msc), start the service manually, stop it and uninstall it.

MyTickerService.exe install
MyTickerService.exe start
MyTickerService.exe stop
MyTickerService.exe uninstall

Alternate implementation

The sample code previously showed a way to take a standard C# class and using the Topshelf methods, turn it into a simple service. The class itself had no dependency on Topshelf, but we could have gone a different route and implemented the ServiceControl interface from Topshelf, giving us a TickerService which looks like this

public class TickerService : ServiceControl
{
   private readonly Timer timer;

   public TickerService()
   {
      timer = new Timer(5000) {AutoReset = true};
      timer.Elapsed += (sender, e) =>
      {
         Console.WriteLine("Tick:" + DateTime.Now.ToLongTimeString());
      };
   }

   public bool Start(HostControl hostControl)
   {
      Console.WriteLine("Service started");

      timer.Start();
      return true;
   }

   public bool Stop(HostControl hostControl)
   {
      Console.WriteLine("Service stopped");

      timer.Stop();
      return true;
   }
}

So the only changes here are that we implement the ServiceControl interface and with that implement the Start and Stop methods, meaning we no longer need to tell Topshelf which methods to run when the applications starts and stops, so our Program.cs Main method can now be reduced to

HostFactory.Run(x =>
{
   x.Service<TickerService>();

   x.SetServiceName("SampleService");
   x.SetDisplayName("Sample");
   x.SetDescription("Sample");
   x.RunAsLocalSystem();
});

As our TickerService constructor takes no arguments, we can simply use the x.Service&lt;TickerService&gt;() call to create it.

Logging

By default Topshelf will log using TraceSource, but if you want to use one of the alternative, more advanced logging libraries such as NLog or log4net, then thankfully Topshelf allows us to easily integrate these.

I’m going to use NLog. So from NuGet find a version of Topshelf.NLog compatible with the version of Topshelf you’re using and add this package to your project.

Let’s add a fairly generic NLog configuration to the App.config

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog"/>
  </configSections>
  <nlog>
    <targets>
      <target name="console" type="Console" layout="${message}" />
      <target name="debugger" type="Debugger" layout="${message}"/>
    </targets>
    <rules>
      <logger name="*" minlevel="Debug" writeTo="console,debugger" />
    </rules>
  </nlog>
</configuration>

Now we’ll declare the following variable in the TickerService class

private readonly LogWriter logWriter;

within the TickerService constructor, assign the log writer thus

logWriter = HostLogger.Get<TickerService>();

and finally, in place of the Console.WriteLine calls in the TickerService we use logWriter.Debug; for example, here’s the Elapsed event

logWriter.Debug("Tick:" + DateTime.Now.ToLongTimeString());

Structured logging with Serilog

I’ve used several .NET logging frameworks over the years, including the Logging Application Block, log4net and NLog, as well as various logging frameworks in other languages.

In most cases the output to a log file is a formatted string, but in the past I’ve wondered about creating my logs in a way that could be queried or grouped in a more effective way. Recently this has become more relevant with a work project where we’re starting to use the ELK stack (Elasticsearch, Logstash & Kibana).

So I decided to take a look at some of the structured logging frameworks out there for .NET.

Getting started

I’m using Serilog 2.0, so obviously things may differ from this post with the version you’re using.

Let’s create a very simple console project to try this out on

  • Create a Console application
  • Using NuGet add Serilog
  • Using NuGet add Serilog.Sinks.RollingFile
  • Using NuGet add Serilog.Sinks.Literate

Now, we’ll jump straight into the code for this example

static void Main(string[] args)
{
   Log.Logger = new LoggerConfiguration()
      .WriteTo.LiterateConsole()
      .WriteTo.RollingFile("logs\\sample-{Date}.txt")
      .MinimumLevel.Verbose()
      .CreateLogger();

   Log.Logger.Information("Application Started");

   for (int i = 0; i < 10; i++)
   {
      Log.Logger.Information("Iteration {I}", i);
   }

   Log.Logger.Information("Exiting Application");
}

In this example, we use Serilog’s fluent interface to create our configuration. Whilst Serilog can be configured via a configuration file it appears it’s positioned more towards code based configuration.

Using the WriteTo property we can set up one or more output mechanisms. In this case I’m using a console sink (LiterateConsole outputs nicely coloured logs to the console) and a RollingFile created in a logs folder under the application’s folder.

By default, Serilog sets its minimum logging level to Information; I’ve effectively told it to log everything by setting it to Verbose in the above code.
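The real benefit of structured logging shows when we log more than simple values: prefixing a property name with the @ operator tells Serilog to destructure the object into the log event rather than call ToString() on it. A small sketch (the order object here is purely illustrative):

```csharp
// Sketch: logging a structured object; {@Order} asks Serilog to
// serialise the object's properties into the log event's Properties
var order = new { Id = 42, Total = 9.99 };
Log.Logger.Information("Processed order {@Order}", order);
```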

If we run this and view the output and/or log file, we’ll find the following output

2016-07-07 15:34:07.864 +01:00 [Information] Application Started
2016-07-07 15:34:07.906 +01:00 [Information] Iteration 0
2016-07-07 15:34:07.908 +01:00 [Information] Iteration 1
2016-07-07 15:34:07.908 +01:00 [Information] Iteration 2
2016-07-07 15:34:07.909 +01:00 [Information] Iteration 3
2016-07-07 15:34:07.910 +01:00 [Information] Iteration 4
2016-07-07 15:34:07.911 +01:00 [Information] Iteration 5
2016-07-07 15:34:07.911 +01:00 [Information] Iteration 6
2016-07-07 15:34:07.912 +01:00 [Information] Iteration 7
2016-07-07 15:34:07.912 +01:00 [Information] Iteration 8
2016-07-07 15:34:07.913 +01:00 [Information] Iteration 9
2016-07-07 15:34:07.913 +01:00 [Information] Exiting Application

As our purpose was to get queryable output, this doesn’t really help a lot; what we really want is a more structured output format, such as XML or JSON.

Let’s replace the .WriteTo.RollingFile line with the following

// requires using Serilog.Formatting.Json and Serilog.Sinks.RollingFile
.WriteTo.Sink(new
   RollingFileSink("logs\\sample-{Date}.txt",
      new JsonFormatter(renderMessage: true),
      1073741824,  // file size limit in bytes (1 GB)
      31))         // number of log files retained

Now our log file output looks like this

{"Timestamp":"2016-07-07T15:42:35.2095535+01:00","Level":"Information","MessageTemplate":"Application Started","RenderedMessage":"Application Started"}
{"Timestamp":"2016-07-07T15:42:35.2695595+01:00","Level":"Information","MessageTemplate":"Iteration {I}","RenderedMessage":"Iteration 0","Properties":{"I":0}}
{"Timestamp":"2016-07-07T15:42:35.2725598+01:00","Level":"Information","MessageTemplate":"Iteration {I}","RenderedMessage":"Iteration 1","Properties":{"I":1}}
{"Timestamp":"2016-07-07T15:42:35.2735599+01:00","Level":"Information","MessageTemplate":"Iteration {I}","RenderedMessage":"Iteration 2","Properties":{"I":2}}
{"Timestamp":"2016-07-07T15:42:35.2735599+01:00","Level":"Information","MessageTemplate":"Iteration {I}","RenderedMessage":"Iteration 3","Properties":{"I":3}}
{"Timestamp":"2016-07-07T15:42:35.2745600+01:00","Level":"Information","MessageTemplate":"Iteration {I}","RenderedMessage":"Iteration 4","Properties":{"I":4}}
{"Timestamp":"2016-07-07T15:42:35.2745600+01:00","Level":"Information","MessageTemplate":"Iteration {I}","RenderedMessage":"Iteration 5","Properties":{"I":5}}
{"Timestamp":"2016-07-07T15:42:35.2755601+01:00","Level":"Information","MessageTemplate":"Iteration {I}","RenderedMessage":"Iteration 6","Properties":{"I":6}}
{"Timestamp":"2016-07-07T15:42:35.2755601+01:00","Level":"Information","MessageTemplate":"Iteration {I}","RenderedMessage":"Iteration 7","Properties":{"I":7}}
{"Timestamp":"2016-07-07T15:42:35.2765602+01:00","Level":"Information","MessageTemplate":"Iteration {I}","RenderedMessage":"Iteration 8","Properties":{"I":8}}
{"Timestamp":"2016-07-07T15:42:35.2765602+01:00","Level":"Information","MessageTemplate":"Iteration {I}","RenderedMessage":"Iteration 9","Properties":{"I":9}}
{"Timestamp":"2016-07-07T15:42:35.2775603+01:00","Level":"Information","MessageTemplate":"Exiting Application","RenderedMessage":"Exiting Application"}

We now have JSON output, which I’m sure was obvious from the use of the JsonFormatter. Passing renderMessage: true (the default is false) ensures the output includes the rendered message as well as the MessageTemplate etc. The rendered message is equivalent to the textual output we saw earlier.

Returning to our original code (the Main method), we have the line Log.Logger.Information("Iteration {I}", i);. You’ll notice we are using the placeholder {} syntax. The “I” within the {} could be named anything you like; it’s the property name used in the JSON Properties field. Of course we can just as easily change this to something a little more meaningful, such as

Log.Logger.Information("Iteration {Iteration}", i);

What is important, though, is not to fall into the trap of using String.Format (or string interpolation) to build the message, as that would lose the Iteration property name.
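To illustrate the difference, here’s a hypothetical side-by-side (not from the original code):

```csharp
// structured: the value is captured as a named property in the JSON output
Log.Logger.Information("Iteration {Iteration}", i);
// the Properties field will include the Iteration value

// flattened: the value is baked into the string before Serilog sees it,
// so no Iteration property appears in the output
Log.Logger.Information(string.Format("Iteration {0}", i));
```

The two calls render the same text, but only the first retains the value as queryable data.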

Adding context

Often we’ll want to log with some more context, for example which class we’re logging from, especially in situations where we might have similar output messages throughout our code. In such cases we can use the ForContext method

Log.Logger.ForContext<Program>().Information("Application Started");

this will cause our log file output to include the property SourceContext with a value of the namespace.classname, for example

"Properties":{"Iteration":9,"SourceContext":"SerilogTest.Program"

Note: You might prefer to store the ILogger returned by Log.Logger.ForContext() instead of having the long line of code every time.
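As a sketch of that (my own arrangement; the logger is assigned after configuration so the context isn’t captured against an unconfigured logger):

```csharp
using Serilog;

class Program
{
   static void Main(string[] args)
   {
      Log.Logger = new LoggerConfiguration()
         .WriteTo.LiterateConsole()
         .CreateLogger();

      // capture the contextual logger once and reuse it
      ILogger log = Log.Logger.ForContext<Program>();

      log.Information("Application Started"); // includes SourceContext
   }
}
```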

Clean disposal/completion of logging

Depending upon the logging mechanism (sink) used within Serilog, you may need to call the following when your application shuts down

Log.CloseAndFlush();

this is especially relevant if the sink is buffered.

That’s it: straightforward, easy to use, and now we can read the JSON log files into an appropriately capable application and query, group etc. our log entries.
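As a simple illustration of that querying (a sketch assuming Json.NET is installed via NuGet; the file name and the Iteration property name are just examples):

```csharp
using System;
using System.IO;
using System.Linq;
using Newtonsoft.Json.Linq;

class LogQuery
{
   static void Main()
   {
      // parse each JSON log line and pick out the iteration entries
      var iterations = File.ReadLines("logs\\sample-20160707.txt")
         .Select(JObject.Parse)
         .Where(o => o["Properties"]?["Iteration"] != null)
         .Select(o => (int)o["Properties"]["Iteration"]);

      foreach (var i in iterations)
         Console.WriteLine(i);
   }
}
```

In practice something like the ELK stack would do this for us, but it shows how the structured properties become filterable values rather than just text.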

Embedded NoSQL with LiteDB

I was looking for a simple file based object/document database and came across LiteDB. This gives similar functionality to MongoDB.

If you’re looking for good documentation on LiteDB, I would suggest going to Getting Started. I’ll undoubtedly duplicate some/much of what’s written there in this post which is mainly aimed at reminding me how to get up and running with LiteDB.

Getting started

It’s so easy to get started with LiteDB. Let’s first define an object model for some data that we might wish to store.

public class Artist
{
   public string Name { get; set; }
   public IList<string> Members { get; set; }
}

public class Album
{
   public int Id { get; set; }
   public Artist Artist { get; set; }
   public string Name { get; set; }
   public string Genre { get; set; }
}

We need an Id property, or a property marked with a BsonId attribute, on our POCO. Whilst we can get away without this for some operations, others (updates, for one) will fail without the Id property/BsonId attribute.

To use LiteDB, simply install the NuGet package LiteDB. Here’s the bare minimum to create/open a LiteDB database file and get a reference to the collection for our CRUD operations

using (var db = new LiteDatabase("Albums.db"))
{
   var albums = db.GetCollection<Album>("albums");
   // now we can carry out CRUD operations on the data
}

Easy enough. The LiteCollection returned from GetCollection allows us to work on our data in a very simple manner.

For example, let’s insert a new album

albums.Insert(
   new Album
   {
      Artist = new Artist
      {
         Name = "Led Zeppelin",
         Members = new List<string>
         {
            "Jimmy", "Robert", "JP", "John"
         }
      },
      Name = "Physical Graffiti",
      Genre = "Rock"
   });

To retrieve all the data, we can use

var results = albums.FindAll();

We can also query for specific data using a predicate, for example

var r = albums.Find(a => a.Artist.Name == "Alice Cooper");

Note: in this example the Artist.Name property has not been indexed, so performance could be improved by creating an index on that field.
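For example, a sketch of creating that index via EnsureIndex (the exact overloads may vary with the LiteDB version):

```csharp
// create an index on the nested Artist.Name field
// (a no-op if the index already exists)
albums.EnsureIndex("Artist.Name");

// subsequent queries on Artist.Name can make use of the index
var r = albums.Find(Query.EQ("Artist.Name", "Alice Cooper"));
```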

To update, we need to get the instance from LiteDB (or at least know the Id) and then make the changes as follows

var zep = albums.Find(a => a.Artist.Name == "Led Zeppelin").First();
zep.Artist.Members[2] = "John Paul Jones";
albums.Update(zep);

Obviously in the above we’re assuming there’s at least one matching item (we call First(), but there might be zero, one or many results); the key point is that we simply call the Update method.

Deleting all items can be achieved by deleting the DB file itself, or by using

albums.Delete(Query.All());

or we can delete an individual item by calling

// use the Id property 
albums.Delete(album.Id); 

// or

// use a query type syntax
albums.Delete(x => x.Artist.Name == "Led Zeppelin"); 

// or

// using the Query syntax similar to deleting all items
albums.Delete(Query.EQ("Artist.Name", new BsonValue("Led Zeppelin")));

Obviously the Query syntax seems a little over the top for most cases, but offers a more query-like syntax if required.

And there’s more

Okay, I’m not intending to document everything in this single post, but I have to just touch on transactions. It’s great to see the ability to use ACID transactions with LiteDB.

So to ensure we only commit when all operations are successful we simply use

var albums = db.GetCollection<Album>("albums");

db.BeginTrans();
// multiple operations against LiteDB
db.Commit();
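A slightly fuller sketch, rolling back when any operation fails (the album values are just examples):

```csharp
var albums = db.GetCollection<Album>("albums");

db.BeginTrans();
try
{
   albums.Insert(new Album { Name = "Presence", Genre = "Rock" });
   albums.Insert(new Album { Name = "Coda", Genre = "Rock" });
   db.Commit();   // both inserts persisted together
}
catch
{
   db.Rollback(); // neither insert persisted
   throw;
}
```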

Multiple Inserts

It seems (from the documentation etc.) that when we carry out an insert, LiteDB creates an “auto-transaction” around the insert for us (see Transactions and Concurrency). As such we should be aware that if we’re performing many inserts (for example from a list of objects), it’s best from a performance point of view not to call Insert once per item.

Instead, either use the IEnumerable overload of the Insert method or wrap all the inserts within a transaction (I’m assuming the latter would also help, though I’ve not yet tested it).

This makes sense, but can be easily forgotten.
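For example, the bulk overload might be used like this (newAlbums is a hypothetical batch):

```csharp
// a hypothetical batch of albums to store
var newAlbums = new List<Album>
{
   new Album { Name = "Houses of the Holy", Genre = "Rock" },
   new Album { Name = "In Through the Out Door", Genre = "Rock" }
};

// one call, one auto-transaction, instead of one insert per item
albums.Insert(newAlbums);
```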