Introduction
NBench is a performance testing framework that we can run from our unit tests to profile method performance and (hopefully) ensure that any refactoring or other changes highlight possible performance issues instantly (well, at least when the unit tests are run, which after all should be on every check-in).
So what we want to do is write some performance-based unit tests around a method. Perhaps the method is currently slow and we want to refactor it, or it's a critical point where performance must meet a certain level, i.e. the method call must complete in less than n ms.
Note: Libraries such as NUnit already have a capability for profiling the amount of time the test takes to complete, but NBench offers this and a lot more besides.
We can write a performance test around our code which will fail if the method takes longer than expected to complete.
NBench is much more than just a stopwatch/timer though. We can also benchmark memory usage, GCs and counters.
Getting Started
My current priority is classic performance testing, i.e. how fast is this method. So let’s start by writing some code and some tests using NBench for this and see how things fit together.
- Create a Class Library project
- Add the NuGet package – i.e. Install-Package NBench
- From the Package Manager Console, also run Install-Package NBench.Runner
Note: NBench.Runner is a simple command line runner, which we’ll use initially to test our code.
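With both packages installed and the project built, the runner is invoked from the command line against the compiled test assembly. A sketch of the invocation; the assembly name and output path below are assumptions, and the exact location of NBench.Runner.exe depends on where NuGet unpacked the package:

```shell
# Point NBench.Runner at the compiled test assembly
# (assembly name and output directory are illustrative)
NBench.Runner.exe MyProject.PerformanceTests.dll output-directory="C:\perf-results"
```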
Let’s create a “Simple” Cache class to run our tests on.
using System.Collections.Generic;

public class SimpleCache<T>
{
    private readonly List<T> cache = new List<T>();

    public void Add(T item)
    {
        // Linear search before every insert, so Add is O(n)
        if (!Contains(item))
            cache.Add(item);
    }

    public bool Contains(T item)
    {
        return cache.Contains(item);
    }
}
Next, let's write our first NBench performance test. The code under test is the SimpleCache above, which (with its linear search) is not an optimal solution to the requirements of writing a cache.
using NBench;

public class CacheTest
{
    private SimpleCache<string> cache = new SimpleCache<string>();

    [PerfBenchmark(NumberOfIterations = 1,
        RunMode = RunMode.Throughput,
        TestMode = TestMode.Test,
        SkipWarmups = true)]
    [ElapsedTimeAssertion(MaxTimeMilliseconds = 2000)]
    public void Add_Benchmark_Performance()
    {
        for (var i = 0; i < 100000; i++)
        {
            cache.Add(i.ToString());
        }
    }
}
We need to mark our test with the PerfBenchmark attribute. This in itself doesn't measure anything; it's more about telling the runner what to do with the method it's annotating. So we also need to declare some measurements. We're currently only interested in the elapsed time of the method under test, so we use the ElapsedTimeAssertionAttribute and state that our method should take no longer than 2000 ms to complete.
We can run multiple iterations of the method, in which case the result is the average of the runs. This is especially useful if we're looking at GCs etc., but for this example we're just going to run the test once.
The use of TestMode.Test makes the benchmark work like a unit test, i.e. PASS/FAIL. For information on the other attribute properties, see the NBench GitHub page.
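For contrast, here's a measurement-only sketch of the same benchmark: with TestMode.Measurement the runner simply collects and reports the figures rather than passing or failing (the method name is an assumption):

```csharp
// Measurement-only variant: three iterations are run and the averaged
// metrics are reported, but no assertion can fail the build
[PerfBenchmark(NumberOfIterations = 3,
    RunMode = RunMode.Throughput,
    TestMode = TestMode.Measurement)]
public void Add_Benchmark_Measurement()
{
    for (var i = 0; i < 100000; i++)
    {
        cache.Add(i.ToString());
    }
}
```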
Upon running this code using NBench.Runner.exe, I get a FAIL (as I expected); this method is a lot slower than I want it to be with this number of items being added.
So let's leave the test as it is and see if we can refactor the code. Here's a quick and dirty version of the cache using a Dictionary.
public class SimpleCache<T>
{
    private readonly Dictionary<T, T> cache = new Dictionary<T, T>();

    public void Add(T item)
    {
        if (!Contains(item))
            cache.Add(item, item);
    }

    public bool Contains(T item)
    {
        return cache.ContainsKey(item);
    }
}
Now when I run the test with NBench.Runner, I get a PASS (well, we would expect this implementation to be massively faster than the linear-search list version!).
Integration with our build server
Obviously we run a build/continuous integration server, so we want to integrate these tests into our builds. It may be that you can use the NBench runner within your build, but (as mentioned in a previous post) there's already a way to achieve this integration with the likes of NUnit (see NBench Performance Testing – NUnit and ReSharper Integration).
I've recreated the code shown in the aforementioned blog post here, for completeness:
using System;
using System.Collections;
using System.Linq;
using NBench;
using NBench.Reporting.Targets;
using NBench.Sdk;
using NBench.Sdk.Compiler;
using NUnit.Framework;

public abstract class PerformanceTestSuite<T>
{
    [TestCaseSource(nameof(Benchmarks))]
    public void PerformanceTests(Benchmark benchmark)
    {
        Benchmark.PrepareForRun();
        benchmark.Run();
        benchmark.Finish();
    }

    public static IEnumerable Benchmarks()
    {
        var discovery = new ReflectionDiscovery(
            new ActionBenchmarkOutput(report => { }, results =>
            {
                foreach (var assertion in results.AssertionResults)
                {
                    Assert.True(assertion.Passed,
                        results.BenchmarkName + " " + assertion.Message);
                    Console.WriteLine(assertion.Message);
                }
            }));

        var benchmarks = discovery.FindBenchmarks(typeof(T)).ToList();

        foreach (var benchmark in benchmarks)
        {
            var name = benchmark.BenchmarkName.Split('+')[1];
            yield return new TestCaseData(benchmark).SetName(name);
        }
    }
}
Now simply derive our test from PerformanceTestSuite, like this:
public class CacheTest : PerformanceTestSuite<CacheTest>
{
    // our code
}
Benchmarking GC collections
What if we want to look at measuring the method in terms of garbage collection and its possible effect on our code's performance? NBench includes the GcTotalAssertionAttribute. Just add the following to the previous method:
[GcTotalAssertion(GcMetric.TotalCollections,
    GcGeneration.Gen0, MustBe.ExactlyEqualTo, 0.0d)]
public void Add_Benchmark_Performance()
{
    // our code
}
This, as one might expect, causes a failure on my test as it’s expecting ZERO Gen 0 collections. With this FAIL information we can now update our test to something more realistic or look at whether we can alter our code to pass the test (obviously this is less likely for a caching class).
Pre and Post benchmarking code
We have the equivalent of setup/teardown attributes. A method marked with PerfSetupAttribute runs before the benchmark tests; this is useful for setting up counters and the like used within the tests. A PerfCleanupAttribute is used on a method that cleans up any state after the test.
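As a sketch of how these fit together, the setup method obtains a counter from the BenchmarkContext, the benchmark increments it, and the cleanup method resets state afterwards (the counter name, threshold and class name here are assumptions for illustration):

```csharp
using System;
using NBench;

public class CacheCounterTest
{
    private Counter addCounter;
    private SimpleCache<string> cache;

    [PerfSetup]
    public void Setup(BenchmarkContext context)
    {
        // Runs before the benchmark; fetch the counter declared
        // by the CounterThroughputAssertion below
        cache = new SimpleCache<string>();
        addCounter = context.GetCounter("AddCounter");
    }

    [PerfBenchmark(NumberOfIterations = 1,
        RunMode = RunMode.Throughput, TestMode = TestMode.Test)]
    [CounterThroughputAssertion("AddCounter", MustBe.GreaterThan, 10000.0d)]
    public void Add_Throughput(BenchmarkContext context)
    {
        cache.Add(Guid.NewGuid().ToString());
        addCounter.Increment();
    }

    [PerfCleanup]
    public void Cleanup()
    {
        // Runs after the benchmark; release any state
        cache = null;
    }
}
```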
Available Assertions
In the current release of NBench we have the following assertions:
- MemoryAssertionAttribute – asserts based upon the memory allocated
- GcTotalAssertionAttribute – asserts based upon total GC’s
- ElapsedTimeAssertionAttribute – asserts based upon the amount of time a method takes
- CounterTotalAssertionAttribute – asserts based upon counter collection
- GcThroughputAssertionAttribute – asserts based upon GC throughput (i.e. collections per second)
- CounterThroughputAssertionAttribute – asserts based upon counter throughput (i.e. operations per second)
- PerformanceCounterTotalAssertionAttribute – asserts based upon total performance counter values
- PerformanceCounterThroughputAssertionAttribute – asserts based upon throughput of performance counters