Category Archives: Testing

Beware those async void exceptions in unit tests

This is a cautionary tale…

I’ve been moving my builds to a new server box and in doing so started to notice some problems. These problems didn’t exist on the previous build box, so this might be down to the updated versions of the tools (NAnt etc.) I was using on the new box.

Anyway, the crux of the problem was that I was getting exceptions when the tests were run; nothing told me which assembly or which test was at fault, I just ended up with

NullReferenceExceptions and a stack trace stating this

System.Runtime.CompilerServices.AsyncMethodBuilderCore.b__0(System.Object)

Straight away this looks like an async/await issue.

To cut a long story short, I removed all the test assemblies, then one by one placed them back into the build, running the “test” target in NAnt each time until I found the assembly with the problems. Running the tests this way also seemed to report more specifics about the area of the exception.

As we know, we need to be particularly careful handling exceptions that occur within an async void method (mine exist because they’re event handlers which are raised upon property changes in a view model).
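
To illustrate, here’s a minimal sketch (the Watcher type and names are hypothetical, not my actual code) of the kind of handler I mean; note that a try/catch around the code which raises PropertyChanged will never observe the exception

using System;
using System.ComponentModel;
using System.Threading.Tasks;

public class Watcher
{
   public Watcher(INotifyPropertyChanged viewModel)
   {
      // the event handler signature forces async void upon us
      viewModel.PropertyChanged += OnPropertyChanged;
   }

   private async void OnPropertyChanged(object sender, PropertyChangedEventArgs e)
   {
      await Task.Delay(10);
      // an exception thrown here cannot be caught by the code that raised the
      // event; it's re-thrown on the SynchronizationContext (or thread pool),
      // which is exactly how it surfaces as an unexplained test-run failure
      throw new NullReferenceException();
   }
}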

Interestingly, when I ran the ReSharper Test Runner all my tests passed, but switching to running the tests in Debug showed up all the exceptions (of course assuming that you have the relevant “Exceptions” enabled within Visual Studio). The downside of this approach, of course, is that I also see all the exceptions that are intentionally tested for, i.e. those ensuring my code throws when it should. But it was worth it to solve these problems.

Luckily none of these exceptions would have been a problem in the normal running of my application, they were generally down to missing mocking code or the like, but it did go to show that I shouldn’t have been smug in believing my tests caught every issue.

Unit tests in Go

In the previous post I mentioned that the Go SDK often has unit tests alongside the package code. So what do we need to do to write unit tests in Go?

Let’s assume we have the package (from the previous post)

package test

func Echo(s string) string {
	return s
}

Assuming the previous code is in a file named test.go, we then create a new file in the same package/folder named test_test.go (I know, the name’s not great).

Let’s look at the code within this file

package test_test

import "testing"
import "Test1/test"

func TestEcho(t *testing.T) {
	expected := "Hello"
	actual := test.Echo("Hello")

	if actual != expected {
		t.Error("Test failed")
	}
}

So Go’s unit testing functionality comes in the package “testing”, and each of our test functions must start with the word Test and take a pointer to type testing.T. T gives us the methods to report failures etc.

In Gogland you can select the test_test.go file, right mouse click and you’ll see the Run, Debug and Run with coverage options. Alternatively you can run go test from the package folder on the command line.

NUnit’s TestCaseSourceAttribute

When we use the TestCaseAttribute with NUnit tests, we can define the parameters to be passed to a unit test, for example

[TestCase(1, 2, 3)]
[TestCase(6, 3, 9)]
public void Sum_EnsureValuesAddCorrectly1(double a, double b, double result)
{
   Assert.AreEqual(result, a + b);
}

Note: In a previous release the TestCaseAttribute also had a Result property; this has since been renamed to ExpectedResult, so here we’ll take the expected result in the parameter list instead.

This is great, but what if we want our data to come from a dynamic source? We obviously cannot do this with the attributes alone, but we can with the TestCaseSourceAttribute.

In its simplest form we could rewrite the above test like this

[Test, TestCaseSource(nameof(Parameters))]
public void Sum_EnsureValuesAddCorrectly(double a, double b, double result)
{
   Assert.AreEqual(result, a + b);
}

private static double[][] Parameters =
{
   new double[] { 1, 2, 3 },
   new double[] { 6, 3, 9 }
};

An alternative to the above is to return TestCaseData objects, as follows

[Test, TestCaseSource(nameof(TestData))]
public double Sum_EnsureValuesAddCorrectly(double a, double b)
{
   return a + b;
}

private static TestCaseData[] TestData =
{
   new TestCaseData(1, 2).Returns(3),
   new TestCaseData(6, 3).Returns(9)
};

Note: In both cases, the TestCaseSourceAttribute expects a static field, property or method to supply the data for our test.
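
For example, using a static method as the source is much the same (a minimal sketch along the lines of the earlier tests)

[Test, TestCaseSource(nameof(GetTestData))]
public double Sum_EnsureValuesAddCorrectly2(double a, double b)
{
   return a + b;
}

// a static method works just like the static field/property sources above
private static IEnumerable<TestCaseData> GetTestData()
{
   yield return new TestCaseData(1, 2).Returns(3);
   yield return new TestCaseData(6, 3).Returns(9);
}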

The member which returns the data (above) doesn’t need to be in the test class; we could use a separate class, such as

[Test, TestCaseSource(typeof(TestDataClass), nameof(TestData))]
public double Sum_EnsureValuesAddCorrectly(double a, double b)
{
   return a + b;
}

class TestDataClass
{
   public static IEnumerable TestData
   {
      get
      {
         yield return new TestCaseData(1, 2).Returns(3);
         yield return new TestCaseData(6, 3).Returns(9);
      }
   }
}

Extending our unit test capabilities using the TestCaseSource

If we take a look at NBench Performance Testing – NUnit and ReSharper Integration, we can see how to extend our test capabilities by using NUnit to run our extensions, i.e. with NBench we want to create unit tests which run performance tests within the same NUnit set of tests (or separately, but via the same test runners).

I’m going to recreate a similar set of features for a more simplistic performance test.

Note: this code is solely to show how we can create a similar piece of testing functionality, it’s not meant to be compared to NBench in any way; plus NUnit also has a MaxTimeAttribute which would be sufficient for most timing/performance tests.

Let’s start by creating an attribute which we’ll use to detect methods that should be performance tested. Here’s the code for the attribute

[AttributeUsage(AttributeTargets.Method)]
public class PerformanceAttribute : Attribute
{
   public PerformanceAttribute(int max)
   {
      Max = max;
   }

   public int Max { get; set; }
}

The Max property defines a max time (in ms) that a test method should take. If it takes longer than the Max value, we expect a failing test.

Let’s quickly create some tests to show how this might be used

public class TestPerf : PerformanceTestRunner<TestPerf>
{
   [Performance(100)]
   public void Fast_ShouldPass()
   {
      // simulate a 50ms method call
      Task.Delay(50).Wait();
   }

   [Performance(100)]
   public void Slow_ShouldFail()
   {
      // simulate a slow 10000ms method call
      Task.Delay(10000).Wait();
   }
}

Notice we’re not actually marking the class as a TestFixture or the methods as Tests; the base class PerformanceTestRunner will create the TestCaseData, and therefore the test methods (as such), for us.

So let’s look at that base class

public abstract class PerformanceTestRunner<T>
{
   [TestCaseSource(nameof(Run))]
   public void TestRunner(MethodInfo method, int max)
   {
      var sw = new Stopwatch();
      sw.Start();
      method.Invoke(this, 
         BindingFlags.Instance | BindingFlags.InvokeMethod, 
         null, 
         null, 
         null);
      sw.Stop();

      Assert.True(
         sw.ElapsedMilliseconds <= max, 
         method.Name + " took " + sw.ElapsedMilliseconds
      );
   }

   public static IEnumerable Run()
   {
      var methods = typeof(T)
         .GetMethods(BindingFlags.Public | BindingFlags.Instance);
      
      foreach (var m in methods)
      {
         var a = (PerformanceAttribute)m.GetCustomAttribute(typeof(PerformanceAttribute));
         if (a != null)
         {
            yield return 
               new TestCaseData(m, a.Max)
                     .SetName(m.Name);
         }
      }
   }
}

Note: We’re using a method Run to supply TestCaseData. This must be public as it needs to be accessible to NUnit. Also we use SetName on the TestCaseData passing the method’s name, hence we’ll see the method as the test name, not the TestRunner method which actually runs the test.

This is a quick and dirty example, which basically locates each method with a PerformanceAttribute and yields this to allow the TestRunner method to run the test method. It simply uses a stopwatch to check how long the test method took to run and compares with the setting for Max in the PerformanceAttribute. If the time to run the test method is less than or equal to Max, then the test passed, otherwise it fails with a message.

When run via a test runner you should see a node in the tree view showing TestPerf, with a child of PerformanceTestRunner.TestRunner, then child nodes below this for each TestCaseData run against the TestRunner; here we’ll see the method names Fast_ShouldPass and Slow_ShouldFail – and that’s it, we’ve reused NUnit and the NUnit runners (such as ReSharper etc.) and created a new testing capability, the performance test.

Auto generating test data with Bogus

I’ve visited this topic (briefly) in the past using NBuilder and AutoFixture.

When writing unit tests it would be useful if we could create objects quickly, with random or, better still, semi-random data.

When I say semi-random, what I mean is that we might have some type with an Id property and we know this Id can only have a certain range of values, so we want a value generated for this property within that range; or maybe we would like a CreatedDate property with data that resembles n years in the past, as opposed to just a random date.

This is where libraries such as Faker.Net and Bogus come in – they allow us to generate objects and data, which meets certain criteria and also includes the ability to generate data which “looks” like real data. For example, first names, jobs, addresses etc.

Let’s look at an example – first we’ll see what the “model” looks like, i.e. the object we want to generate test data for

public class MyModel
{
   public string Name { get; set; }
   public long Id { get; set; }
   public long Version { get; set; }
   public Date Created { get; set; }
}

public struct Date
{
   public int Day { get; set; }
   public int Month { get; set; }
   public int Year { get; set; }
}

The Date struct was included because it mirrors a similar type of object I get from some web services and because it obviously requires certain constraints, hence it seemed a good example of writing such code.

Now let’s assume that we want to create a MyModel object. Using Bogus we can create a Faker object and apply constraints or rules for assigning random data. Here’s an example implementation

var modelFaker = new Faker<MyModel>()
   .RuleFor(o => o.Name, f => f.Name.FirstName())
   .RuleFor(o => o.Id, f => f.Random.Number(100, 200))
   .RuleFor(o => o.Version, f => f.Random.Number(300, 400))
   .RuleFor(o => o.Created, f =>
   {
      var date = f.Date.Past();
      return new Date { Day = date.Day, Month = date.Month, Year = date.Year };
   });

var myModel = modelFaker.Generate();

Initially we create the equivalent of a builder class, in this case the Faker. Whilst we can generate a MyModel without all the rules being set, the rules allow us to customize what’s generated to something more meaningful for our use – especially when it comes to the Date type. So in order, the rules state that the Name property on MyModel should resemble a FirstName, the Id is set to a random value within the range 100-200, likewise the Version is constrained to the range 300-400 and finally the Created property is set by generating a past date and assigning the day, month and year to our Date struct.

Finally we Generate an instance of a MyModel object via the Faker builder class. Example values supplied by the Faker object are shown below

Created - Day = 12, Month = 7, Year = 2016
Id - 116
Name - Gwen
Version - 312

Obviously this code only works for classes with a default constructor. So what do we do if there’s no default constructor?

Let’s add the following to the MyModel class

public MyModel(long id, long version)
{
   Id = id;
   Version = version;
}

Now we simply change our Faker to look like this

var modelFaker = new Faker<MyModel>()
   .CustomInstantiator(f => 
      new MyModel(
         f.Random.Number(100, 200), 
         f.Random.Number(300, 400)))
   .RuleFor(o => o.Name, f => f.Name.FirstName())
   .RuleFor(o => o.Created, f =>
   {
      var date = f.Date.Past();
      return new Date { Day = date.Day, Month = date.Month, Year = date.Year };
   });

What if you don’t want to create the whole MyModel object via Faker, but instead just want to generate a valid looking first name for the Name property? Or what if you are already using something like NBuilder but want to use just the Faker data generation code?

This can easily be achieved by using the non-generic Faker. Create an instance of it and you’ve got access to the same data, so for example

var f = new Faker();

myModel.Name = f.Name.FirstName();
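
Along similar lines, if you need a whole collection of test objects, the generic Faker<MyModel> can generate several at once (a quick sketch, reusing the modelFaker from earlier)

// generates a list of five MyModel instances, each with freshly generated data
var models = modelFaker.Generate(5);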

References

Bogus for .NET/C#

Scientist in the making (aka using Science.NET)

When we’re dealing with refactoring legacy code, we’ll often try to ensure the existing unit tests (if they exist), or new ones, cover as much of the code as possible before refactoring it. But there’s always a concern about turning off the old code completely until we’ve got high confidence in the new code. Obviously the test coverage figures and unit tests themselves should give us that confidence, but wouldn’t it be nice if instead we ran the old and new code in parallel and compared the behaviour, or at least the results, of the code? This is where the Scientist library comes in.

Note: This is very much (from my understanding) in an alpha/pre-release stage of development, so any code written here may differ from the way the library ends up working. So basically what I’m saying is this code works at the time of writing.

Getting started

So the elevator pitch for Science.NET is that it “allows us to run two different implementations of code, side by side, and compare the results”. Let’s expand on that with an example.

First off, we’ll set up our Visual Studio project.

  • Create a new console application (just because it’s simple to get started with)
  • From the Package Manager Console, execute Install-Package Scientist -Pre

Let’s start with a very simple example: assume we have a method which returns a numeric value, and we don’t really need to worry much about what this value means – but if you like a back story, let’s assume we import data into an application and the method calculates the confidence that the data matches a known import pattern.

So the legacy code, or the code we wish to verify/test against looks like this

public class Import
{
   public float CalculateConfidenceLevel()
   {
       // do something clever and return a value
       return 0.9f;
   }
}

Now our new Import class looks like this

public class NewImport
{
   public float CalculateConfidenceLevel()
   {
      // do something clever and return a value
      return 0.4f;
   }
}

Okay, okay, I know the result is wrong, but this is meant to demonstrate the Science.NET library, not my Import code.

Right, so what we want to do is run the two versions of the code side-by-side and see whether they always give the same result. We’re going to simply run these in our console’s Main method for now, but of course the idea is that this code would be run from wherever you currently run the Import code. For now just add the following to Main (we’ll discuss strategies for running the code briefly after this)

var import = new Import();
var newImport = new NewImport();

float confidence = 
   Scientist.Science<float>(
      "Confidence Experiment", experiment =>
   {
      experiment.Use(() => import.CalculateConfidenceLevel());
      experiment.Try(() => newImport.CalculateConfidenceLevel());
   });

Now, if you run this console application you’ll see the confidence variable has the value 0.9 in it, as the result comes from the .Use code, but the Science method (surely this should be named the Experiment method :)) will actually run both of our methods and compare the results.

Obviously as both the existing and new implementations are run side-by-side, performance might be a concern for complex methods, especially if running like this in production. See the RunIf method for turning on/off individual experiments if this is a concern.

The “Confidence Experiment” string denotes the name of the comparison test and can be useful in reports, but if you ran this code you’ll have noticed everything just worked, i.e. no errors, no reports, nothing. That’s because at this point the default result publisher (which can be accessed via Scientist.ResultPublisher) is an InMemoryResultPublisher; we need to implement a publisher to output to the console (or maybe to a logger or some other mechanism).

So let’s pretty much take the MyResultPublisher from Scientist.net but output to console, so we have

public class ConsoleResultPublisher : IResultPublisher
{
   public Task Publish<T>(Result<T> result)
   {
      Console.WriteLine(
          $"Publishing results for experiment '{result.ExperimentName}'");
      Console.WriteLine($"Result: {(result.Matched ? "MATCH" : "MISMATCH")}");
      Console.WriteLine($"Control value: {result.Control.Value}");
      Console.WriteLine($"Control duration: {result.Control.Duration}");
      foreach (var observation in result.Candidates)
      {
         Console.WriteLine($"Candidate name: {observation.Name}");
         Console.WriteLine($"Candidate value: {observation.Value}");
         Console.WriteLine($"Candidate duration: {observation.Duration}");
      }

      if (result.Mismatched)
      {
         // saved mismatched experiments to DB
      }

      return Task.FromResult(0);
   }
}

Now insert the following before the float confidence = line in our Main method

Scientist.ResultPublisher = new ConsoleResultPublisher();

Now when you run the code you’ll get the following output in the console window

Publishing results for experiment 'Confidence Experiment'
Result: MISMATCH
Control value: 0.9
Control duration: 00:00:00.0005241
Candidate name: candidate
Candidate value: 0.4
Candidate duration: 00:00:03.9699432

So now you can see where the string passed to the Science method is used.

More…

Check out the documentation on Scientist.net or the source itself for more information.

Real world usage?

First off, let’s revisit how we might actually design our code to use such a library. The example was created from scratch to demonstrate basic use of the library, but it’s more likely that we’d either create an abstraction layer which instantiates and executes the legacy and new code or, if available, add the new method to the legacy implementation code. So in an ideal world our Import and NewImport classes might implement an IImport interface. Thus it would be best to implement a new version of this interface and within its methods call the Science code, for example

public interface IImport
{
   float CalculateConfidenceLevel();
}

public class ImportExperiment : IImport
{
   private readonly IImport import = new Import();
   private readonly IImport newImport = new NewImport();

   public float CalculateConfidenceLevel()
   {
      return Scientist.Science<float>(
         "Confidence Experiment", experiment =>
         {
            experiment.Use(() => import.CalculateConfidenceLevel());
            experiment.Try(() => newImport.CalculateConfidenceLevel());
         });
   }
}

I’ll leave the reader to put the : IImport after the Import and NewImport classes.

So now our Main method would have the following

Scientist.ResultPublisher = new ConsoleResultPublisher();

var import = new ImportExperiment();
var result = import.CalculateConfidenceLevel();

Using an interface like this now means it’s both easy to switch from the old Import to the experiment implementation and eventually to the new implementation, but then hopefully this is how we always code. I know those years of COM development make interfaces almost the first thing I write along with my love of IoC.

And more…

Comparison replacement

So the simple example above demonstrates the return of a primitive/standard type, but what if the return is one of our own more complex objects, requiring a more complex comparison? We can supply our own comparison, for example

experiment.Compare((a, b) => a.Name == b.Name);

Of course, we could hand this comparison off to a more complex predicate.

Unfortunately the Science method expects a return type and hence if your aim is to run two methods with a void return and maybe test some encapsulated data from the classes within the experiment, then you’ll have to do a lot more work.
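
One workaround (just a sketch of the idea, using a hypothetical Run method and Confidence property, not part of the library or my actual code) is to wrap the void calls in delegates which return whatever encapsulated state you want to compare

float confidence =
   Scientist.Science<float>(
      "Void Experiment", experiment =>
   {
      // run the void methods, then return the state we care about so that
      // Scientist has something to record and compare
      experiment.Use(() => { import.Run(); return import.Confidence; });
      experiment.Try(() => { newImport.Run(); return newImport.Confidence; });
   });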

Toggle on or off

The IExperiment interface on which we call .Use and .Try also has the RunIf method, which I mentioned briefly earlier. We might wish to write our code in such a way that the dev environment runs the experiments but production does not, ensuring our end users do not suffer performance hits due to the experiment running. We can use RunIf in the following manner

experiment.RunIf(() => !environment.IsProduction);

for example.

If we needed to include this line in every experiment it might be quite painful, so it’s actually more likely we’d use this to block/run specific experiments; maybe we run all experiments in all environments, except one very slow experiment.

To enable/disable all experiments, instead we can use

Scientist.Enabled(() => !environment.IsProduction);

Note: this method is not in the NuGet package I’m using, but it is in the current source on GitHub and in the documentation, so hopefully it will work as expected in a subsequent release of the NuGet package.

Running something before an experiment

We might need to run something before an experiment starts, but with that code within the context of the experiment, a little like a test setup method; for this we can use

experiment.BeforeRun(() => BeforeExperiment());

In the above we’ll run some method BeforeExperiment() before the experiment continues.

Finally

I’ve not covered all the currently available methods here, as the Scientist.net repository already does that, but hopefully I’ve given a peek into what you might do with this library.

Returning values (in sequence) using JustMock Lite

InSequence

Okay, so I have some code which is of the format

do 
{
   while(reader.Read())
   {
      // do something 
   }
} while (reader.ReadNextPage());

The basic premise is: read some data until it is exhausted, then read the next page of data and so on, until no data is left to read.

I wanted to unit test aspects of this by mocking out the reader, allowing me to isolate the specific functionality within the method. Of course I could have refactored this method to test just the inner parts of the loop, but this is not always desirable as it still means the looping expectation is not unit tested.

I can easily mock ReadNextPage to return false to test just one page of data, but the Read method itself needs to return true initially, yet must also return false at some point or the unit test will potentially get stuck in an infinite loop. Hence, I need to be able to eventually return false from the Read method.

Using InSequence, we can return different values on the calls to the Read method, for example using

Mock.Arrange(() => reader.ReadNextPage()).Returns(false);
Mock.Arrange(() => reader.Read()).Returns(true).InSequence();
Mock.Arrange(() => reader.Read()).Returns(false).InSequence();

Here the first call to Read obviously returns true, the next call returns false, so the unit test will actually complete and we’ll successfully test the loop and whatever is within it.
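
Putting it all together, a complete test might look something like this (IPagedReader and DataProcessor are hypothetical stand-ins for my actual types)

[Test]
public void Process_ShouldReadUntilDataExhausted()
{
   var reader = Mock.Create<IPagedReader>();

   Mock.Arrange(() => reader.ReadNextPage()).Returns(false);
   Mock.Arrange(() => reader.Read()).Returns(true).InSequence();
   Mock.Arrange(() => reader.Read()).Returns(false).InSequence();

   var processor = new DataProcessor(reader);
   processor.Process();

   // Read should have been called exactly twice, once returning true
   // and once returning false to exit the loop
   Mock.Assert(() => reader.Read(), Occurs.Exactly(2));
}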

Unit testing and “The current SynchronizationContext may not be used as a TaskScheduler” error

When running unit tests (for example with xUnit) against code that requires a synchronization context, you might see a test fail with the message

The current SynchronizationContext may not be used as a TaskScheduler.

The easiest way to resolve this is to supply your own SynchronizationContext to the unit test class, for example in a static constructor (for xUnit) or in a SetUp method (for NUnit).

static MyTests()
{
   SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());		
}
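
The NUnit equivalent (a sketch) would be to set the context in the SetUp method

[SetUp]
public void SetUp()
{
   SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());
}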

Note: xUnit supplies a Synchronization context when using async tests, but when running Reactive Extensions or TPL code it seems we need to supply our own.

Excluding assemblies from Code Coverage

I regularly run Visual Studio’s code coverage analysis to get an idea of my test coverage, but there’s a lot of auto-generated code in the project and I wanted to turn off the code coverage metrics for those assemblies.

I could look to add the ExcludeFromCodeCoverage attribute as outlined in a previous post, “How to exclude code from code coverage”, but this is a little laborious when you have many types to add it to; also, in some cases, I do not have control of the code-gen tools to apply such attributes after every regeneration of the code – so not exactly ideal.

There is a solution as described in the post Customizing Code Coverage Analysis, which allows us to create a solution-wide file to exclude assemblies from code coverage. I’m going to summarize the steps to create the file here…

Creating the .runsettings file

  • Select your solution in the solution explorer and then right mouse click and select Add | New Item…
  • Select XML File and change the name to your solution name with the .runsettings extension (the name needn’t be the solution name but it’s a good starting point).
  • Now I’ve taken the following from Customizing Code Coverage Analysis but reduced it to the bare minimum; I would suggest you refer to the aforementioned post for a more complete file if you need to use the extra features.
    <?xml version="1.0" encoding="utf-8"?>
    <!-- File name extension must be .runsettings -->
    <RunSettings>
      <DataCollectionRunSettings>
        <DataCollectors>
          <DataCollector friendlyName="Code Coverage" uri="datacollector://Microsoft/CodeCoverage/2.0" assemblyQualifiedName="Microsoft.VisualStudio.Coverage.DynamicCoverageDataCollector, Microsoft.VisualStudio.TraceCollector, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
            <Configuration>
              <CodeCoverage>
                <!-- Match assembly file paths: -->
                <ModulePaths>
                  <Include>
                    <ModulePath>.*\.dll$</ModulePath>
                    <!--<ModulePath>.*\.exe$</ModulePath>-->
                  </Include>
                  <Exclude>
                    <ModulePath>.*AutoGenerated.dll</ModulePath>
                    <ModulePath>.*Tests.dll</ModulePath>
                  </Exclude>
                </ModulePaths>
    
                <!-- We recommend you do not change the following values: -->
                <UseVerifiableInstrumentation>True</UseVerifiableInstrumentation>
                <AllowLowIntegrityProcesses>True</AllowLowIntegrityProcesses>
                <CollectFromChildProcesses>True</CollectFromChildProcesses>
                <CollectAspDotNet>False</CollectAspDotNet>
    
              </CodeCoverage>
            </Configuration>
          </DataCollector>
        </DataCollectors>
      </DataCollectionRunSettings>
    </RunSettings>
    

    In the above code you’ll note I’ve included all DLLs using the regular expression .*\.dll$ but then gone on to exclude a couple of assemblies.

    Note: If I do NOT include the .* in the exclude module paths I found that the analysis still included those files. So just typing the correct name of the assembly on its own failed and I needed the .* for this to work.

  • The include happens first and then the exclude takes place. Hence we can use wildcards in the include then exclude certain assemblies explicitly.
  • Before we can actually use the .runsettings file we need to tell Visual Studio about it. Before you test your changes, select the Test menu item, then Test Settings followed by Select Test Settings File, and select your runsettings file.

    Note: you can tick/untick the selected file via the same menu option to turn on/off the runsettings file being used

Now I can run code coverage across my code and will see only the assemblies that matter to me.

Unit testing native C++ code using Visual Studio

Before I begin with this post, let me state that Unit testing native code with Test Explorer explains this topic very well. So why am I writing a post on this? Well, I did encounter a couple of issues and felt it worth documenting those along with the steps I took when using the Microsoft unit testing solution for C++ in Visual Studio.

Let’s jump straight in – my intention is to create a simple class which has some setter and getter methods (nothing special) and test the implementations. Obviously this is just a simple example, but it will cover some of the fundamentals (I hope).

Here are the steps to get a couple of Visual Studio projects up and running, one being the code we want to test, the second being for the unit tests.

  • Let’s start by creating a DLL for our library/code.
  • Open Visual Studio and create a New Project
  • Select Visual C++ Win32 Project and give it a name (mine’s named MotorController), then press OK
  • When the Win32 Application Wizard appears, press next then select DLL for Application Type, uncheck Security Development Lifecycle and check Export Symbols. I’m not going to use MFC or ATL for this so leave them unchecked, then press Finish
  • Now let’s create the unit test project
  • Select Add | New Project from the solution context menu
  • Select the Visual C++ | Test section and click on Native Unit Test Project
  • Give the unit test project a name (mine’s MotorControllerTests), then press OK
  • Before we can test anything in the MotorController project we need to reference the project
  • Right mouse click on your unit test project and select Properties
  • Select Common Properties | Framework and References
  • Press the Add New Reference button and check the project with code to be tested (i.e. my MotorController project), press OK and OK again on Properties dialog

At this point you should have two projects, one for your code and one for your unit tests. Both are DLLs, and the unit test project includes the code to run the tests via the Test Explorer.

So before we write any code and to ensure all is working, feel free to run the tests…

Select the menu item – Test | Run | Run All Tests. If all goes well, within Test Explorer, you’ll see a single test class named UnitTest1 (unless you renamed this) and a single method TestMethod1 (unless you changed this).

Now let’s go ahead and write some tests. I’m going to assume you’ve used the same names as I have for the objects etc. but feel free to change the code to suit your object names etc.

  • Rename the TEST_CLASS from UnitTest1 to MotorControllerTest and change the file name of the unit test to match (i.e. MotorControllerTest.cpp)
  • We’re going to need access to the header file for the MotorController class so add an include to the MotorControllerTest.cpp file to include the “MotorController.h” header, I’m going to simply use the following for now (i.e. I’m not going to set up the include folders in VC++)
    #include "../MotorController/MotorController.h"
    
  • We’re going to implement a couple of simple setter and getter methods to demonstrate the concepts of unit testing with Visual Studio. So to begin with let’s rename the current TEST_METHOD to getSpeed, then add another TEST_METHOD named getDirection, so your code should look like this
    TEST_CLASS(MotorControllerTest)
    {
    public:
       TEST_METHOD(getSpeed)
       {
       }
    
       TEST_METHOD(getDirection)
       {
       }    
    };
    
  • Now if we run these tests we’ll see our newly named class and two test methods are green. As we’ve not implemented the code this might be a little off-putting, so you can always insert an Assert::Fail line into your unit test method until it’s implemented, for example
    TEST_METHOD(getSpeed)
    {
       Assert::Fail();
    }
    

    If you now run your tests (assuming you placed the Assert::Fail into your methods) they will both fail, which is as expected until such time as we implement the code to make them pass.

  • To save going through each step in creating the code, I’ll now supply the unit test code for the final tests
    TEST_CLASS(MotorControllerTest)
    {
    public:
    		
       TEST_METHOD(getSpeed)
       {
          CMotorController motor;
          motor.setSpeed(123);
    
          Assert::AreEqual(123, motor.getSpeed());
       }
    
       TEST_METHOD(getDirection)
       {
          CMotorController motor;
          motor.setDirection(Forward);
    
          Assert::AreEqual(Forward, motor.getDirection());
       }    
    };
    
  • Next let’s implement some code in the MotorController.h and MotorController.cpp
    // MotorController.h
    
    enum Direction
    {
        Forward,
        Reverse
    };
    
    // This class is exported from the MotorController.dll
    class MOTORCONTROLLER_API CMotorController {
    private:
        int speed;
        Direction direction;
    public:
    	CMotorController(void);
    
        void setSpeed(int speed);
        int getSpeed();
    
        void setDirection(Direction direction);
        Direction getDirection();
    };
    
    

    and

    // MotorController.cpp
    
    CMotorController::CMotorController()
    {
    }
    
    void CMotorController::setSpeed(int speed)
    {
        this->speed = speed;
    }
    
    int CMotorController::getSpeed()
    {
        return speed;
    }
    
    void CMotorController::setDirection(Direction direction)
    {
        this->direction = direction;
    }
    
    Direction CMotorController::getDirection()
    {
        return direction;
    }
    
  • If you run these tests you’ll find a compiler error, something along the lines of

    Error 1 error C2338: Test writer must define specialization of ToString for your class class std::basic_string,class std::allocator > __cdecl Microsoft::VisualStudio::CppUnitTestFramework::ToString(const enum Direction &). c:\program files (x86)\microsoft visual studio 11.0\vc\unittest\include\cppunittestassert.h 66 1 MotorControllerTests

    The problem here is that we’ve introduced a type which we have no ToString method for within the CppUnitTestAssert.h header, so we need to add one. Simply insert the following code before your TEST_CLASS

    namespace Microsoft{ namespace VisualStudio {namespace CppUnitTestFramework 
    {
        template<> static std::wstring ToString<Direction>(const Direction& direction) 
        { 
           return direction == Forward ? L"F" : L"R"; 
        };
    }}}
    

    The concatenation of the namespaces on a single line is obviously not necessary, I just copied the way the CppUnitTestAssert.h file has its namespaces, and it also ensures I can easily show you the main code for this. What does matter is that we’ve implemented a new ToString which understands the Direction type/enum.

  • Finally, run the tests and see what the outcome is – both tests should pass, feel free to break the getter code to prove the SUT is really being tested

That should be enough to get your unit testing in VC++ up and running.

Introduction to using Pex with Microsoft Code Digger

This post is specific to the Code Digger Add-In, which can be used with Visual Studio 2012 and 2013.

Requirements

Code Digger will appear in Tools | Extensions and Updates and of course can be downloaded via this dialog.

What is Pex?

So Pex is a tool for automatically generating test suites. Pex will generate input-output values for your methods by analysing the code’s flow and the arguments required by each method.

What is Code Digger?

Code Digger supplies an add-in for Visual Studio which allows us to select a method, generate inputs/outputs using Pex and display the results within Visual Studio.

Let’s use Code Digger

Enough talk, let’s write some code and try it out.

Create a new solution; I’m going to create a “standard” class library project. Older versions of Code Digger only worked with PCLs, but now (I’m using 0.95.4) you can go to Tools | Options in Visual Studio, select Pex’s General option and change DisableCodeDiggerPortableClassLibraryRestriction to True (if it’s not already set) and run Pex against non-PCL code.

Let’s start with a very simple class and a few methods

public static class Statistics
{
   public static double Mean(double[] values)
   {
      return values.Average();
   }

   public static double Median(double[] values)
   {
      Array.Sort(values);

      int mid = values.Length / 2;
      return (values.Length % 2 == 0) ?
         (values[mid - 1] + values[mid]) / 2 :
         values[mid];
   }

   public static double[] Mode(double[] values)
   {
      var grouped = values.GroupBy(v => v).OrderBy(g => g.Count());
      int max = grouped.Max(g => g.Count());
			
      return (max <= 1) ?
         new double[0] :
         grouped.Where(g => g.Count() == max).Select(g => g.Key).ToArray();
   }
}

Now you may have noticed we do not check for the “values” array being null or empty. This is on purpose, to demonstrate Pex detecting possible failures.

Now, we’ll use the Code Digger add-in.

Right mouse click on a method, let’s take the Mean method to begin with, and select Generate Inputs / Outputs Table. Pex will run and create a list of inputs and outputs. In my code for Mean, I get two failures: Pex has executed my method with a null input and an empty array, and neither case is handled (as mentioned previously) by my Mean code.

If you now try the other methods you should see similar failures, but hopefully also successes with a range of input values.

Unfortunately (at the time of writing at least) there doesn’t appear to be an option in Code Digger to generate either unit tests automatically or save the inputs for my own unit tests. So for now you’ll have to manually write your tests with the failing inputs and implement code to make those work.
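
For example, taking the null/empty array failures Pex found for Mean, we might add a guard clause and a matching hand-written test (a sketch using NUnit, purely to illustrate; Pex doesn’t generate this for us)

public static double Mean(double[] values)
{
   // guard against the inputs Pex flagged as failures
   if (values == null || values.Length == 0)
      throw new ArgumentException("values must contain at least one item", nameof(values));

   return values.Average();
}

[Test]
public void Mean_NullValues_Throws()
{
   Assert.Throws<ArgumentException>(() => Statistics.Mean(null));
}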

Note: I did find at one time the Generate Inputs / Outputs Table menu option missing; I disabled and re-enabled the Code Digger add-in, restarted Visual Studio and it reappeared.