Category Archives: NUnit

Unit testing your MAUI project

Note: I found this didn’t work correctly on Visual Studio for Mac; I’ll update the post if I do get it working.

This post is pretty much a duplicate of Adding xUnit Test to your .NET MAUI Project, just simplified so I can quickly repeat the steps that allow me to write unit tests against my MAUI project code.

So, you want to unit test some code in your MAUI project. It’s not quite as simple as just creating a test project then referencing the MAUI project. Here are the steps to create an NUnit test project with .NET 7 (as my MAUI project has been updated to .NET 7).

  • Add a new NUnit Test Project to the solution via a right mouse click on the solution, then Add | Project, and select NUnit Test Project
  • Open the MAUI project (csproj) file and prepend net7.0 to the TargetFrameworks element so it looks like this
    <TargetFrameworks>net7.0;net7.0-android;net7.0-ios;net7.0-maccatalyst</TargetFrameworks>
    
  • Replace <OutputType>Exe</OutputType> with the following
    <OutputType Condition="'$(TargetFramework)' != 'net7.0'">Exe</OutputType>
    

    Yes, this is TargetFramework singular. You may need to reload the project.

  • Now in your unit test project you can reference this MAUI project and write your tests
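Putting the two edits together, the relevant portion of the MAUI csproj should end up looking something like this (all other properties omitted; your project will have more):

```xml
<PropertyGroup>
  <!-- net7.0 added at the front so a plain .NET 7 build is produced alongside the platform builds -->
  <TargetFrameworks>net7.0;net7.0-android;net7.0-ios;net7.0-maccatalyst</TargetFrameworks>
  <!-- Only the platform-specific targets produce an Exe; the plain net7.0 target builds a DLL we can reference -->
  <OutputType Condition="'$(TargetFramework)' != 'net7.0'">Exe</OutputType>
</PropertyGroup>
```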

So, why did we carry out these steps?

Our test project targets .NET 7.0, but our MAUI project was targeting the platform-specific implementations of .NET 7, i.e. those for Android etc. We need the MAUI project to also build a plain .NET 7.0-compatible assembly, hence we added net7.0 to the TargetFrameworks.

The OutputType change ensures we only build an EXE output for the platform-specific frameworks; for .NET 7.0 we’ll instead have a DLL to reference from our tests.

Now we can build and run our unit tests.

NUnit’s TestCaseSourceAttribute

When we use the TestCaseAttribute with NUnit tests, we can define the parameters to be passed to a unit test, for example

[TestCase(1, 2, 3)]
[TestCase(6, 3, 9)]
public void Sum_EnsureValuesAddCorrectly1(double a, double b, double result)
{
   Assert.AreEqual(result, a + b);
}

Note: In a previous release the TestCaseAttribute also had a Result property, but this is no longer the case, so here we pass the expected result in the parameter list.
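Current NUnit versions do offer an ExpectedResult named property on TestCase; when it’s used, the test method returns the value to be compared rather than asserting in the body. A minimal sketch (the class and method names here are just for illustration):

```csharp
using NUnit.Framework;

public class SumTests
{
   // ExpectedResult replaces the old Result property; the test method
   // returns the computed value and NUnit performs the comparison
   [TestCase(1, 2, ExpectedResult = 3)]
   [TestCase(6, 3, ExpectedResult = 9)]
   public double Sum_UsingExpectedResult(double a, double b) => a + b;
}
```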

This is great, but what if we want our data to come from a dynamic source? We obviously cannot do that with attributes alone, but we can by using the TestCaseSourceAttribute.

In its simplest form, we could rewrite the above test like this

[Test, TestCaseSource(nameof(Parameters))]
public void Sum_EnsureValuesAddCorrectly(double a, double b, double result)
{
   Assert.AreEqual(result, a + b);
}

private static double[][] Parameters =
{
   new double[] { 1, 2, 3 },
   new double[] { 6, 3, 9 }
};

An alternative to the above is to return TestCaseData objects, as follows

[Test, TestCaseSource(nameof(TestData))]
public double Sum_EnsureValuesAddCorrectly(double a, double b)
{
   return a + b;
}

private static TestCaseData[] TestData =
{
   new TestCaseData(1, 2).Returns(3),
   new TestCaseData(6, 3).Returns(9)
};

Note: In both cases, the TestCaseSourceAttribute expects a static field, property or method to supply the data for our test.

The property which returns the array (above) doesn’t need to be in the test class; we could use a separate class, such as

[Test, TestCaseSource(typeof(TestDataClass), nameof(TestData))]
public double Sum_EnsureValuesAddCorrectly(double a, double b)
{
   return a + b;
}

class TestDataClass
{
   public static IEnumerable TestData
   {
      get
      {
         yield return new TestCaseData(1, 2).Returns(3);
         yield return new TestCaseData(6, 3).Returns(9);
      }
   }
}
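As noted, the source may also be a static method rather than a field or property, which is handy when the cases are computed rather than hard-coded. A minimal sketch (class and member names are just for illustration):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public class ComputedSourceTests
{
   [Test, TestCaseSource(nameof(GeneratedCases))]
   public double Sum_FromMethodSource(double a, double b) => a + b;

   // A static method source; each yielded TestCaseData becomes a test case
   private static IEnumerable<TestCaseData> GeneratedCases()
   {
      for (var i = 1; i <= 3; i++)
      {
         yield return new TestCaseData((double)i, (double)i)
            .Returns((double)(i + i));
      }
   }
}
```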

Extending our unit test capabilities using TestCaseSource

If we take a look at NBench Performance Testing – NUnit and ReSharper Integration we can see how to extend our test capabilities by having NUnit run our extensions, i.e. with NBench we want to create unit tests that run performance tests within the same NUnit set of tests (or separately, but via the same test runners).

I’m going to recreate a similar set of features for a much simpler performance test.

Note: this code is solely to show how we can create a similar piece of testing functionality; it’s not meant to be compared to NBench in any way, plus NUnit also has a MaxTimeAttribute which would be sufficient for most timing/performance tests.

Let’s start by creating an attribute which we’ll use to detect methods that should be performance tested. Here’s the code for the attribute

[AttributeUsage(AttributeTargets.Method)]
public class PerformanceAttribute : Attribute
{
   public PerformanceAttribute(int max)
   {
      Max = max;
   }

   public int Max { get; set; }
}

The Max property defines a max time (in ms) that a test method should take. If it takes longer than the Max value, we expect a failing test.

Let’s quickly create some tests to show how this might be used

public class TestPerf : PerformanceTestRunner<TestPerf>
{
   [Performance(100)]
   public void Fast_ShouldPass()
   {
      // simulate a 50ms method call
      Task.Delay(50).Wait();
   }

   [Performance(100)]
   public void Slow_ShouldFail()
   {
      // simulate a slow 10000ms method call
      Task.Delay(10000).Wait();
   }
}

Notice we’re not actually marking the class as a TestFixture or the methods as Tests, because the base class PerformanceTestRunner will create the TestCaseData (and hence, in effect, the test methods) for us.

So let’s look at that base class

public abstract class PerformanceTestRunner<T>
{
   [TestCaseSource(nameof(Run))]
   public void TestRunner(MethodInfo method, int max)
   {
      var sw = new Stopwatch();
      sw.Start();
      method.Invoke(this, 
         BindingFlags.Instance | BindingFlags.InvokeMethod, 
         null, 
         null, 
         null);
      sw.Stop();

      Assert.True(
         sw.ElapsedMilliseconds <= max, 
         method.Name + " took " + sw.ElapsedMilliseconds
      );
   }

   public static IEnumerable Run()
   {
      var methods = typeof(T)
         .GetMethods(BindingFlags.Public | BindingFlags.Instance);
      
      foreach (var m in methods)
      {
         // only methods marked with [Performance] become test cases
         var a = m.GetCustomAttribute<PerformanceAttribute>();
         if (a != null)
         {
            yield return 
               new TestCaseData(m, a.Max)
                     .SetName(m.Name);
         }
      }
   }
}

Note: We’re using a method, Run, to supply the TestCaseData; like any TestCaseSource supplier it must be static so NUnit can access it without a fixture instance. Also, we use SetName on the TestCaseData, passing the method’s name, hence we’ll see each method’s name as the test name rather than the TestRunner method which actually runs the test.

This is a quick and dirty example which locates each method marked with a PerformanceAttribute and yields it, so the TestRunner method can run each test method in turn. It simply uses a Stopwatch to measure how long the test method took to run and compares this with the Max setting from the PerformanceAttribute. If the time taken is less than or equal to Max, the test passes; otherwise it fails with a message.

When run via a test runner you should see a node in the tree view showing TestPerf, with a child of PerformanceTestRunner.TestRunner, then child nodes below this for each TestCaseData run against TestRunner; here we’ll see the method names Fast_ShouldPass and Slow_ShouldFail. And that’s it: we’ve reused NUnit and the NUnit runners (ReSharper etc.) and created a new testing capability, the performance test.