Monthly Archives: February 2014

Pulse and Wait

The Monitor class contains the static methods Pulse and Wait. Well, it has more than those two of course, but these are the two this post is interested in.

Both methods must be called within a lock; more explicitly, the object that we pass to Pulse and Wait must have had a lock acquired against it. For example

private readonly object sync = new object();

// thread A
lock(sync)
{
   Monitor.Wait(sync);
}

// thread B
lock(sync)
{
   Monitor.Pulse(sync);
}

In the above code we have two threads. Thread A acquires the lock on sync and then calls Wait, at which point the lock is released and the thread blocks until it can reacquire it. Meanwhile, thread B acquires the lock on the sync object and calls Monitor.Pulse, which moves the waiting thread (thread A) to the ready queue. When thread B releases the lock (i.e. exits the lock block), the next ready thread (in this case thread A) reacquires the lock and executes any code after the Monitor.Wait, until it exits the lock block and releases the lock.
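It’s worth noting that Pulse only wakes a thread that is already waiting – a pulse issued when nothing is waiting is simply lost. The usual remedy is to pair the pulse with a state change and to wait in a loop, as in this minimal sketch (the ready flag and the thread names are purely illustrative):

```csharp
using System;
using System.Threading;

class PulseWaitDemo
{
   static readonly object sync = new object();
   static bool ready; // guarded by sync; avoids a lost pulse if B runs first

   static void Main()
   {
      Thread a = new Thread(() =>
      {
         lock (sync)
         {
            while (!ready)          // loop guards against lost/spurious wake-ups
               Monitor.Wait(sync);  // releases sync, blocks, then reacquires it
            Console.WriteLine("A: saw the signal");
         }
      });

      Thread b = new Thread(() =>
      {
         lock (sync)
         {
            ready = true;        // record the state change...
            Monitor.Pulse(sync); // ...then wake one waiter (if any)
         }
      });

      a.Start();
      b.Start();
      a.Join();
      b.Join();
   }
}
```

Because thread A re-checks ready after every wake-up, the code behaves correctly whichever thread happens to run first.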

Producer-consumer pattern

Okay, I’ll admit to a lack of imagination – the producer-consumer pattern, also known as the producer-consumer queue, is the standard sample for showing Pulse and Wait in use, and the excellent Threading in C# posts by Joseph Albahari are probably the best place to look for all C# threading information, but we’re going to walk through the producer-consumer queue here anyway.

So the producer-consumer queue is, simply put, a queue to which multiple threads may add data; as data is added, a thread within the queue does something with it – see Producer–consumer problem.

This example creates one or more threads (as specified by threadCount) which will be used to process items from the queue.

The Enqueue method takes the lock, adds the action to the queue and then pulses. The “processing” threads wait for the lock on the sync object to be released, then check that there are still items in the queue (otherwise the thread goes back into a wait). Assuming there is an item in the queue, we dequeue the item within the lock to ensure no other thread can have removed it, then we release the lock and invoke the action before checking for more items to process.

public class ProducerConsumerQueue
{
   private readonly Task[] tasks;
   private readonly object sync = new object();
   private readonly Queue<Action> queue;

   public ProducerConsumerQueue(int threadCount, CancellationToken cancellationToken)
   {
      Contract.Requires(threadCount > 0);

      queue = new Queue<Action>();

      tasks = new Task[threadCount];
      for (int i = 0; i < threadCount; i++)
      {
         tasks[i] = Task.Factory.StartNew(Process, TaskCreationOptions.LongRunning);
      }

      // register last, so an already-cancelled token cannot invoke Close before tasks exists
      cancellationToken.Register(() => Close(false));
   }

   public void Enqueue(Action action)
   {
      lock (sync)
      {
         queue.Enqueue(action);
         Monitor.Pulse(sync);
      }
   }

   public void Close(bool waitOnCompletion = true)
   {
      for (int i = 0; i < tasks.Length; i++)
      {
         Enqueue(null);
      }
      if (waitOnCompletion)
      {
         Task.WaitAll(tasks);
      }
   }

   private void Process()
   {
      while (true)
      {
         Action action;
         lock (sync)
         {
            while(queue.Count == 0)
            {
                Monitor.Wait(sync);
            }
            action = queue.Dequeue();
         }
         if (action == null)
         {
            break;
         }
         action();
      }
   }
}

Here’s a simple sample of code to interact with the ProducerConsumerQueue

CancellationTokenSource cts = new CancellationTokenSource();

ProducerConsumerQueue pc = new ProducerConsumerQueue(1, cts.Token);
pc.Enqueue(() => Console.WriteLine("1"));
pc.Enqueue(() => Console.WriteLine("2"));
pc.Enqueue(() => Console.WriteLine("3"));
pc.Enqueue(() => Console.WriteLine("4"));

// various ways to exit the queue in an orderly manner
cts.Cancel();
//pc.Enqueue(null);
//pc.Close();

So in this code we create a ProducerConsumerQueue with a single thread. Actions are added to the queue via Enqueue and, as the ProducerConsumerQueue has only a single thread and all the items were added from a single thread, each item is simply invoked in the order it was added to the queue. However, we could have been adding from multiple threads, as the ProducerConsumerQueue is thread safe. Had we created the ProducerConsumerQueue with multiple threads then the order of processing may also have been different.
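Because Enqueue takes the lock before touching the queue, producers can safely add work from several threads at once. A sketch of multi-producer usage against the class above (the thread and item counts are arbitrary):

```csharp
var cts = new CancellationTokenSource();
var pc = new ProducerConsumerQueue(2, cts.Token);

// two producers enqueuing concurrently - Enqueue is safe to call from any thread
Task p1 = Task.Run(() =>
{
   for (int i = 0; i < 5; i++)
      pc.Enqueue(() => Console.WriteLine("p1 item"));
});
Task p2 = Task.Run(() =>
{
   for (int i = 0; i < 5; i++)
      pc.Enqueue(() => Console.WriteLine("p2 item"));
});

Task.WaitAll(p1, p2);
pc.Close(); // enqueues one null per worker thread, then waits for them to drain
```

With two worker threads the interleaving of the output is, of course, nondeterministic.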

Waiting for a specified number of threads using the CountdownEvent Class

In a previous post I discussed the Barrier class. The CountdownEvent class is closely related to the Barrier in that it can be used to block until a set number of signals have been received.

One thing you’ll notice is that the signal on the Barrier is tied to a wait – in other words you call SignalAndWait on a thread – whereas the CountdownEvent has a Signal method and a separate Wait method. Hence a thread can signal that it has reached a point and then continue anyway, or call Wait after it calls Signal; equally, a thread could signal multiple times.

The biggest difference from the Barrier class is that the CountdownEvent does not automatically reset once the specified number of signals have been received. So, once the CountdownEvent counter reaches zero, any attempt to Signal it will result in an InvalidOperationException.

You can increment the count (the CountdownEvent equivalent of adding participants) at runtime, but only before the counter reaches zero.
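For example, AddCount raises the outstanding count, while TryAddCount attempts to and returns false once the event is set; a quick sketch:

```csharp
var ev = new CountdownEvent(2);
ev.AddCount();      // now 3 signals are required
ev.Signal();
ev.Signal();
ev.Signal();        // counter hits zero; Wait() would now return immediately

Console.WriteLine(ev.IsSet);          // True
Console.WriteLine(ev.TryAddCount());  // False - can't increment once set
// ev.AddCount() here would throw InvalidOperationException
```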

Time for some sample code

class Program
{
   static CountdownEvent ev = new CountdownEvent(5);
   static Random random = new Random();

   static void Main()
   {
      Task horse1 = Task.Run(() => 
            GetReadyToRace("Horse1", random.Next(1, 1000)));
      Task horse2 = Task.Run(() => 
            GetReadyToRace("Horse2", random.Next(1, 1000)));
      Task horse3 = Task.Run(() => 
            GetReadyToRace("Horse3", random.Next(1, 1000)));
      Task horse4 = Task.Run(() => 
            GetReadyToRace("Horse4", random.Next(1, 1000)));
      Task horse5 = Task.Run(() => 
            GetReadyToRace("Horse5", random.Next(1, 1000)));

      Task.WaitAll(horse1, horse2, horse3, horse4, horse5);
   }

   static void GetReadyToRace(string horse, int speed)
   {
      Console.WriteLine(horse + " arrives at the race course");
      ev.Signal();

      // wait a random amount of time before the horse reaches the starting gate
      // (Delay returns a Task, so we must wait on it or the pause has no effect)
      Task.Delay(speed).Wait();
      Console.WriteLine(horse + " arrives at the start gate");
      ev.Wait();
   }
}

In the above code, each thread signals when it “arrives at the race course”, then we have a delay for effect and then we wait for all threads to signal. The CountdownEvent counter is decremented for each signal and threads wait until the final thread signals the CountdownEvent at which time the threads are unblocked and able to complete.

As stated previously, unlike the Barrier, once all signals have been received there’s no automatic reset of the CountdownEvent – it has simply finished its job.
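That said, the CountdownEvent does expose a manual Reset method, which re-arms a finished event (optionally with a new count) so the instance can be reused:

```csharp
var ev = new CountdownEvent(2);
ev.Signal();
ev.Signal();
Console.WriteLine(ev.IsSet);        // True - the event has finished

ev.Reset();                         // re-arm with the original count
Console.WriteLine(ev.IsSet);        // False
Console.WriteLine(ev.CurrentCount); // 2

ev.Reset(5);                        // or re-arm with a new count
Console.WriteLine(ev.CurrentCount); // 5
```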

Using ThreadStatic and ThreadLocal

First off, these are two different things altogether: ThreadStatic is actually the ThreadStaticAttribute, and you mark a static variable with the attribute, whereas ThreadLocal is a generic class.

So why are we looking at both in the one post?

Both ThreadStatic and ThreadLocal are used to allow us to declare thread specific values/variables.

ThreadStatic

A static variable marked with the ThreadStatic attribute is not shared between threads; each thread gets its own instance of the static variable.

Let’s look at some code

[ThreadStatic]
static int value;

static void Main()
{
   Task t1 = Task.Run(() =>
   {
      value++;
      Console.WriteLine("T1: " + value);
   });
   Task t2 = Task.Run(() =>
   {
      value++;
      Console.WriteLine("T2: " + value);
   });
   Task t3 = Task.Run(() =>
   {
      value++;
      Console.WriteLine("T3: " + value);
   });

   Task.WaitAll(t1, t2, t3);
}

The output from this (obviously the ordering may be different) is

T3: 1
T1: 1
T2: 1

One thing to watch out for is initializing the ThreadStatic variable. For example, if we write the following

[ThreadStatic]
static int value = 10;

you need to be aware that this is initialized only on the thread the initializer runs on; all the other threads which use value will get a variable initialized with its default value, i.e. 0.

For example, if we change the code very slightly to get

[ThreadStatic]
static int value = 10;

static void Main()
{
   Task t1 = Task.Run(() =>
   {
      value++;
      Console.WriteLine("T1: " + value);
   });
   Task t2 = Task.Run(() =>
   {
      value++;
      Console.WriteLine("T2: " + value);
   });
   Task t3 = Task.Run(() =>
   {
      value++;
      Console.WriteLine("T3: " + value);
   });

   Console.WriteLine("Main thread: " + value);
   
   Task.WaitAll(t1, t2, t3);
}

The output will look something like

Main thread: 10
T2: 1
T1: 1
T3: 1

Finally, as by definition each variable is per-thread, operations upon the instance of the variable are thread safe.

ThreadLocal

Like the ThreadStatic attribute, the ThreadLocal class allows us to declare a variable which is not shared between threads. One of the extra capabilities of this class is that we can initialize each thread’s instance of the variable, as the class takes a factory method used to create and/or initialize the value to be returned. Also, unlike ThreadStatic, which only works on static fields, ThreadLocal can be applied to static or instance variables.

Let’s look at the code

static void Main()
{
   ThreadLocal<int> local = new ThreadLocal<int>(() =>
   {
      return 10;
   });

   Task t1 = Task.Run(() =>
   {
      local.Value++;
      Console.WriteLine("T1: " + local.Value);
   });
   Task t2 = Task.Run(() =>
   {
      local.Value++;
      Console.WriteLine("T2: " + local.Value);
   });
   Task t3 = Task.Run(() =>
   {
      local.Value++;
      Console.WriteLine("T3: " + local.Value);
   });

   Task.WaitAll(t1, t2, t3);
   local.Dispose();
}

The output order may be different, but the output will read something like

T2: 11
T3: 11
T1: 11

As you can see, each thread altered their own instance of the thread local variable and more over we were able to set the default value to 10.

The ThreadLocal class implements IDisposable, so we should Dispose of it when we’ve finished with it.

Apart from Dispose, all public and protected members of ThreadLocal are thread safe.
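As an aside, from .NET 4.5 ThreadLocal<T> can also record every thread’s value if constructed with trackAllValues set to true, exposing them via the Values property. A sketch:

```csharp
var local = new ThreadLocal<int>(() => 10, trackAllValues: true);

Task[] tasks = new Task[3];
for (int i = 0; i < tasks.Length; i++)
   tasks[i] = Task.Run(() => local.Value++);
Task.WaitAll(tasks);

// one entry per thread that touched the variable - typically 11 each,
// though fewer, larger entries appear if the thread pool reused a thread
foreach (int v in local.Values)
   Console.WriteLine(v);

local.Dispose();
```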

Waiting for a specified number of threads using the Barrier Class

First off, if you are looking for the definitive online resource on threading in C#, don’t waste your time here. Go to Threading in C# by Joseph Albahari. This is without doubt the best resource anywhere on C# threading, in my opinion (for what it’s worth).

This said, I’m still going to create this post for my own reference.

The Barrier Class

The Barrier class is a synchronization class which allows us to block until a set number of threads have signalled the Barrier. Each thread signals and waits, and is thus blocked by the barrier until the set number of signals has been reached, at which point they are all unblocked and allowed to continue. Unlike the CountdownEvent, when the Barrier has been signalled the specified number of times it resets and can block again and again, waiting each time for the specified number of threads to signal.

Where would a synchronization class be without a real world analogy – so, let’s assume we are waiting for a known number of race horses (our threads) to arrive at a start barrier (our Barrier class) and once they all arrive we can release them. Only when they’ve all reached the end line (the Barrier class again) do we output the result of the race.

Let’s look at some code (note: this sample can just be dropped straight into a console app’s Program class)

static Barrier barrier = new Barrier(5, b => 
         Console.WriteLine("--- All horses have reached the barrier ---"));
static Random random = new Random();

static void Main()
{
   Task horse1 = Task.Run(() => Race("Horse1", random.Next(1, 1000)));
   Task horse2 = Task.Run(() => Race("Horse2", random.Next(1, 1000)));
   Task horse3 = Task.Run(() => Race("Horse3", random.Next(1, 1000)));
   Task horse4 = Task.Run(() => Race("Horse4", random.Next(1, 1000)));
   Task horse5 = Task.Run(() => Race("Horse5", random.Next(1, 1000)));

   // this is solely here to stop the app closing until all threads have completed
   Task.WaitAll(horse1, horse2, horse3, horse4, horse5);
}

static void Race(string horse, int speed)
{
   Console.WriteLine(horse + " is at the start gate");
   barrier.SignalAndWait();

   // wait a random amount of time before the horse reaches the finish line
   // (Delay returns a Task, so we must wait on it or the pause has no effect)
   Task.Delay(speed).Wait();
   Console.WriteLine(horse + " reached the finishing line");
   barrier.SignalAndWait();
}

Maybe I’ve taken the horse race analogy a little too far :)

Notice that the Barrier constructor allows us to add an action which is executed after each phase, i.e. when the specified number of threads have signalled the barrier.

We can add participants to the barrier at runtime, so if we were initially waiting for three threads and these created another three threads (for example), we could then notify the barrier that we want to add the three more threads using the AddParticipant or AddParticipants methods. Likewise we could reduce the participant count using RemoveParticipant or RemoveParticipants.
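The participant-count methods can be sketched as follows (the counts here are arbitrary):

```csharp
var barrier = new Barrier(3);

// three more worker threads are spawned mid-run, so widen the barrier
barrier.AddParticipants(3);
Console.WriteLine(barrier.ParticipantCount); // 6

// later, three workers finish early and deregister
barrier.RemoveParticipants(3);
Console.WriteLine(barrier.ParticipantCount); // 3
```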

As you can see from the code above, when a thread wishes to signal that it’s completed a phase of its processing it calls the SignalAndWait method on the barrier, which, as the name suggests, signals the Barrier and then waits until the Barrier releases it.

Creating Local Packages with NuGet

NuGet allows us to download and install packages into our solutions using the NuGet manager in Visual Studio (or from the command line using nuget.exe) from a remote server – for example, nuget.org. But what if you want to host your own packages locally or remotely (outside of nuget.org), for example due to workplace policies, or simply because you don’t want to publish your own libraries to a public server?

All the information contained in this post was gleaned from the references below. These are fantastic posts and I would urge anyone visiting this site to go and read those resources. What I want to do here is just distill things down to some basics, to remind myself how to quickly and easily produce packages and package specs.

Getting Started

Let’s start off by creating a local (i.e. on your development machine) NuGet package folder.

  1. Simply create a folder on your machine which will be used to store your packages, let’s call it LocalPackages
  2. In Visual Studio 2012, select Tools | NuGet Package Manager | Package Manager Settings
  3. In Package Sources you’ll see the nuget.org source, click the + button, give your source a name such as “Local Feed” and set the Source to the location of your LocalPackages folder
  4. Press OK and we’ve now added a new package source

At this point we have Visual Studio set up to read from our packages folder – now we need to create some packages.

Creating a package

So we need to create a nupkg file (which is basically a compressed file which includes binaries and configuration – change the extension to .zip and open it with your preferred zip editor/viewer to see the layout). This can be created using the Package Explorer GUI or via a nuspec file and nuget.exe.

If you don’t already have it, download and install the command line version of NuGet – see Command Line Utility. I placed mine in the folder I created previously, LocalPackages, as I will run it from there to work with the nupkg files etc.

As I prefer to let tools do the heavy lifting for me, I’ve downloaded the ClickOnce application, Package Explorer GUI.

With the package explorer we can easily create nupkg files without a care in the world for what’s inside them :)

So let’s create a package very quickly using the Package Explorer GUI

  1. If you haven’t already done so, download/run the Package Explorer Gui
  2. Create a new package
  3. We can now edit the metadata or the metadata source from the UI – the metadata editor gives us a simple GUI with input fields, and the metadata source is basically a text editor (well, it’s a little better than that but you get the idea). For now let’s stick with the GUI metadata editor with its nice input fields
  4. Enter an Id – this is the package name, it’s the name used when installing the package
  5. Supply a version. NuGet allows us to download specific package versions. The version should be in the format 1.2.3
  6. Now supply a human friendly title, if none is supplied the Id is used instead
  7. Now enter a comma separated list of authors
  8. I’m not going to step through every UI element here – check out the Nuspec Reference for a complete list of fields and what’s expected
  9. We can add any framework assemblies, and both dependencies and assembly references, via the GUI, but we also need to actually add our assemblies, i.e. what the package has been created to deploy. We drag and drop those into the right-hand pane of the GUI tool
  10. We can right click on the right pane to add files and folders and more, all of which are a little beyond this first post
  11. For now just drag and drop an assembly you wish to package and accept the defaults
  12. Save the nupkg file to the LocalPackages folder

If you now attempt to open this nupkg file, you’ll find it’s a binary file. If you prefer, you can save the file as a nuspec (an XML file) via the GUI using the File | Save Metadata As… option, and then when you are ready you can use

NuGet Pack MyPackageName.1.0.0.nuspec

However you may well find that this fails because the assembly paths may need updating to the actual locations on your hard drive, so open the nuspec file and check the src paths. See, the GUI made things easy.
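For reference, a minimal nuspec looks something like the following – the id, version, and file paths here are placeholders, so adjust the src attribute to wherever your assembly actually lives:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyPackage</id>
    <version>1.0.0</version>
    <authors>Me</authors>
    <description>A sample package.</description>
  </metadata>
  <files>
    <!-- src is relative to the nuspec file -->
    <file src="bin\Release\MyPackage.dll" target="lib" />
  </files>
</package>
```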

Also, when running the NuGet command line tool with Pack you may get a warning about the location of your assembly; for example, mine was by default stored in lib, but running the command line I found it warns that I should move the file to a framework-specific folder.

For now we’ll not worry about this. Instead let’s see our nuget package being used.

Is our NuGet package available?

We can now open Visual Studio (if not already opened) and, assuming you’ve added your local package source as outlined previously:

  1. Go to Tools | NuGet Package Manager | Package Manager Console
  2. You should see “Package source” in the console; select “Local Feed” or whatever name you gave your local package location
  3. Type
    Get-Package -ListAvailable
    

    and you should see the package you created

Adding a local NuGet package to our application

  1. Create yourself a new application to test your nuget package in
  2. Now we can add our package using the Package Manager Console, for example
    Install-Package MyPackage
    

    or, from the References folder in your solution, right mouse click and select Manage NuGet Packages, then from Online select your Local Feed and select and install your packages as normal

Remote Packages

Check out the excellent post Hosting Your Own NuGet Feeds. The section on “Creating Remote Feeds” gets you up and running in no time at all.

But let’s list the steps anyway

  1. Create an empty web application
  2. Add the NuGet.Server package using NuGet
  3. Change the web.config packagesPath value to match the path to your packages
  4. Run the app – yes it’s that simple
  5. In Visual Studio 2012, select Tools | NuGet Package Manager | Package Manager Settings
  6. In Package Sources click the + button, give your remote source a name such as “My Package Feed” and set the Source to the URL of your server which you just created and ran

That’s it for now.

References

Nuspec Reference
Hosting Your Own NuGet Feeds
Using A GUI (Package Explorer) to build packages
Creating and Publishing a Package
Package Manager Console Powershell Reference

More Moq

In the previous post we touched on the fundamentals of using Moq, now we’ll delve a little deeper.

Please note, Moq version 4.20 has introduced SponsorLink, which appears to send data to a third party. See discussions on GitHub.

Verifications

So we can verify that a method or property has been called using the following

var mock = new Mock<IFeed>();

FeedViewModel vm = new FeedViewModel(mock.Object);
vm.Update();

mock.Verify(f => f.Update());

This assumes that vm.Update() calls IFeed.Update().

But what if we are passing a mock into a view model and we want to verify a method on the mock is called, but we don’t care about the specific arguments passed into the method? We can do the following (this example uses the IEventAggregator in Caliburn Micro)

var eventMock = new Mock<IEventAggregator>();

PostListViewModel vm = new PostListViewModel(eventMock.Object);

eventMock.Verify(e => e.Subscribe(It.IsAny<PostListViewModel>()), Times.Once);

In the above example, PostListViewModel’s constructor is expected to call the Subscribe method on the IEventAggregator. The actual implementation passes this into the Subscribe method, but we’ll ignore the argument, except that we expect it to be of type PostListViewModel. The It.IsAny<PostListViewModel>() handles this, and Times.Once simply verifies the Subscribe method was called exactly once.
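If we did care about the argument, It.Is<T> accepts a predicate instead; for example, to assert that the exact view model instance was subscribed (assuming, as in Caliburn Micro, that Subscribe takes an object):

```csharp
eventMock.Verify(
   e => e.Subscribe(It.Is<object>(o => ReferenceEquals(o, vm))),
   Times.Once);
```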

Fundamental Moq

Moq is a fabulously simple yet powerful mocking framework.

Please note, Moq version 4.20 has introduced SponsorLink, which appears to send data to a third party. See discussions on GitHub.

For those unaware of what a mocking framework is and what one has to offer, check out Wikipedia’s Mock Object post.

Here are the fundamentals for starting to use Moq…

Let’s start with a simple interface for us to mock out

public interface IFeed
{
   string Name { get; set; }
   bool Update(int firstNPosts);
}

Now to mock this interface we can simply create a mock object based on the interface, as follows

Mock<IFeed> mock = new Mock<IFeed>();

obviously this is of little use unless we can set up some behaviours, i.e. what happens when a property or method etc. is called. So in the case of this interface, we might want to return a known string for Name, like this

Mock<IFeed> mock = new Mock<IFeed>();

mock.Setup(f => f.Name).Returns("SomeUniqueName");

// now use the mock in a unit test
IFeed feed = mock.Object;
Assert.Equal("SomeUniqueName", feed.Name);

The code mock.Object gets the actual interface from the mock, i.e. in this case an IFeed.

For methods we do much the same thing

Mock<IFeed> mock = new Mock<IFeed>();

mock.Setup(f => f.Update(10)).Returns(true);

// now use the mock in a unit test
IFeed feed = mock.Object;
Assert.True(feed.Update(10));

The examples above are a little contrived; after all, we are not really going to test the mock we set up, but instead want to test something that uses the mock.

For example, what if we have a FeedViewModel which takes an IFeed for display via WPF, something like the following

public class FeedViewModel : PropertyChangedBase
{
   private IFeed feed;
   private bool showProgress;
   private string error;

   public FeedViewModel(IFeed feed)
   {
      this.feed = feed;
   }

   public string Name { get { return feed.Name; } }

   public void Update()
   {
      ShowProgress = true; 
      bool updated = feed.Update(10);
      if(!updated)
      {
         Error = "Failed to update";
      }
      ShowProgress = false;
   }

   public bool ShowProgress
   {
      get { return showProgress; }
      set { this.RaiseAndSetIfChanged(ref showProgress, value); }
   }

   public string Error
   {
      get { return error; }
      set { this.RaiseAndSetIfChanged(ref error, value); }
   }
}

Now we can test our FeedViewModel by creating a mock of the IFeed and passing into the FeedViewModel, allowing us to set up mock calls and check our expectations in tests.

The previous examples showed how we can use mocks to set up return values, or just to take the place of methods when they are called. We may, however, also wish to verify that our code correctly called our mock objects one or more times – we can do this by marking the setup as Verifiable and then verifying the mocks at the end of the unit test, as per the following

Mock<IFeed> mock = new Mock<IFeed>();
mock.Setup(f => f.Posts).Returns(posts).Verifiable();

FeedViewModel model = new FeedViewModel(mock.Object);
Assert.Equal(posts.Count, model.PostCount);

mock.Verify();

In this example, posts is a list of Post objects. Our view model has a property PostCount which uses the list of posts. We want to verify that the Posts property on our (newly extended) IFeed was actually called, so we mark the setup as Verifiable() and at the end of the test we use mock.Verify() to confirm it was.

We can be a little more explicit with our verifications, for example if we expect a method to be called twice, we can verify this as follows

Mock<IFeed> mock = new Mock<IFeed>();
mock.Setup(f => f.Update()).Returns(true);

FeedViewModel model = new FeedViewModel(mock.Object);
// do something on the model which will call the feed Update method

mock.Verify(f => f.Update(), Times.Exactly(2));

So this will fail verification (with a MockException) if the Update() method is not called exactly twice.
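Times offers a number of other factory methods along the same lines; a few illustrative (and mutually exclusive) examples against the same mock:

```csharp
mock.Verify(f => f.Update(), Times.Never());                        // zero calls
mock.Verify(f => f.Update(), Times.AtLeastOnce());                  // one or more calls
mock.Verify(f => f.Update(), Times.Between(2, 4, Range.Inclusive)); // 2 to 4 calls
```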

We may also wish to raise events on the mocked object to see what happens in our object under test. For example, assume we have a StatusChanged event on our IFeed which the FeedViewModel subscribes to, changing an Updating property based upon the relevant event arguments

Mock<IFeed> mock = new Mock<IFeed>();
FeedViewModel model = new FeedViewModel(mock.Object);

mock.Raise(f => f.StatusChanged += null, 
       new StatusChangedEventArgs(StatusChange.UpdateStarted));
Assert.True(model.Updating);

mock.Raise(f => f.StatusChanged += null, 
        new StatusChangedEventArgs(StatusChange.UpdateEnded));
Assert.False(model.Updating);

In this code we’re raising StatusChanged events on the IFeed mock and expecting to see the view model’s Updating property change.

Using Common.Logging with Caliburn Micro

Caliburn Micro offers several extension points, one of which is for logging. For example

LogManager.GetLog = type => new MyLogManager(type);

LogManager.GetLog is a Func<Type, ILog> and allows us to supply our own logging mechanism.

As you can see, Caliburn Micro includes the interface ILog (the second generic parameter in the Func) for us to implement our own logging code. However, I also rather like using the Common.Logging library to abstract the specific logger (i.e. NLog, log4net etc.) from my logging code. So how can we implement our logging using Common.Logging?

We just need to implement our own Caliburn Micro ILog and redirect it to the Common.Logging framework. For example

using CommonLogging = global::Common.Logging;
public class CommonLogManager : Caliburn.Micro.ILog
{
   private readonly CommonLogging.ILog log;

   public CommonLogManager(Type type)
   {
      log = CommonLogging.LogManager.GetLogger(type);
   }

   public void Error(Exception exception)
   {
      log.Error("Exception", exception);
   }

   public void Info(string format, params object[] args)
   {
      log.Info(String.Format(format, args));
   }

   public void Warn(string format, params object[] args)
   {
      log.Warn(String.Format(format, args));
   }
}

The code is a little convoluted with namespaces due to ILog existing in both Caliburn Micro and Common.Logging, but unfortunately being different types.

Now just place the following code into the Configure method on your implementation of a Caliburn Micro Bootstrapper

LogManager.GetLog = type => new CommonLogManager(type);

MahApps, where’s the drop shadow ?

In the MahApps Metro UI, specifically the MetroWindow, we can turn the drop shadow on using the following. In code…

BorderlessWindowBehavior b = new BorderlessWindowBehavior
{
   EnableDWMDropShadow = true,
   AllowsTransparency = false
};

BehaviorCollection bc = Interaction.GetBehaviors(window);
bc.Add(b);

Note: We must set AllowsTransparency to false to use EnableDWMDropShadow.

In XAML, we can use the following

<i:Interaction.Behaviors>
   <behaviours:BorderlessWindowBehavior 
      AllowsTransparency="False" 
      EnableDWMDropShadow="True" />
</i:Interaction.Behaviors>

Binding Ninject to an instance based upon the dependency type

I want to bind Common.Logging.ILog to an implementation which takes a type argument as its parameter, i.e. we can use the Common.Logging.LogManager.GetLogger method by passing a type into it, as per the following code

LogManager.GetLogger(typeof(MyType));

So how can we simply declare a dependency on ILog in our type and have it automatically get the logger with the type parameter?

The code speaks for itself, so here it is

IKernel kernel = new StandardKernel();

kernel.Bind<ILog>().ToMethod(ctx =>
{
   Type type = ctx.Request.ParentContext.Request.Service;
   return LogManager.GetLogger(type);
});

In the above code we bind to a method, so we can handle the binding dynamically. From the IContext ctx we can find which type requested the binding, and then use this in the call to GetLogger.

So with this the dependency object simply includes the following

public class MyClass
{
   // other methods, properties etc.

   [Inject]
   public ILog Logger { get; set; }
}

and everything “magically” binds together to get the logger with the type set to typeof(MyClass).