Monthly Archives: September 2017

Azure Table Storage

Table storage is a schema-less data store, so it has some similarities to NoSQL databases.

For this post I’m just going to cover the code snippets for the basic CRUD type operations.

Creating a table

We can create a table within table storage using the following code.

At this point the table contains no data and, of course, being schema-less it has no form, i.e. it's really just an empty container.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

table.CreateIfNotExists();

Deleting a table

Deleting a table is as simple as this

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("mytable");

table.Delete();

Entities

Using the Azure table API we need to implement the ITableEntity interface or derive our entities from the TableEntity class. For example

class Plant : TableEntity
{
   public Plant()
   {
   }

   public Plant(string type, string species)
   {
      PartitionKey = type;
      RowKey = species;
   }

   public string Comment { get; set; }
}

In this simple example we map the type of plant to the PartitionKey and the species to the RowKey; obviously you might prefer to use Guids or other ways of keying into your data. The thing to remember is that the PartitionKey/RowKey combination must be unique within the table. The example above isn't going to make for very readable code, so it's more likely that we'd also declare properties with more apt names, such as Type and Species, but it was meant to be a quick and simple piece of code.
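For example, a quick sketch (my own variation, not code from the original entity above) that keeps PartitionKey/RowKey as the underlying keys but exposes them through more readable properties might look like this; the aliases have no setters, so the table client won't persist them as extra columns.

class Plant : TableEntity
{
   public Plant()
   {
   }

   public Plant(string type, string species)
   {
      PartitionKey = type;
      RowKey = species;
   }

   // Readable aliases over the table keys; read-only so they're not
   // stored as separate properties
   public string Type => PartitionKey;
   public string Species => RowKey;

   public string Comment { get; set; }
}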

Writing an entity to table storage

Writing entities (and many other entity operations) is handled by the Execute method on the table; which operation is carried out is determined by the TableOperation passed as a parameter to the Execute method.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

table.CreateIfNotExists();

var p = new Plant("Flower", "Rose")
{
   Comment = "Watch out for thorns"
};

table.Execute(TableOperation.Insert(p));

This will throw an exception if we already have an entity with the PartitionKey/RowKey combination in the table storage. So we might prefer to tell the table storage to insert or update…

Updating entities within table storage

If we prefer to handle both insertion OR updating within a single call we can use the TableOperation.InsertOrReplace method

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var p = new Plant("Flower", "Rose")
{
   Comment = "Thorns along the stem"
};

table.Execute(TableOperation.InsertOrReplace(p));

There's also TableOperation.InsertOrMerge which, in essence, merges new properties (if new ones exist) onto an existing entity if the entity already exists.
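A quick sketch of the difference (again, not from the original post, and reusing the plants table from above): with a DynamicTableEntity we can send only a Comment property, and InsertOrMerge will merge just that property onto the existing Flower/Rose entity, leaving any other stored properties untouched.

var merge = new DynamicTableEntity("Flower", "Rose");
merge.Properties.Add(
   "Comment",
   EntityProperty.GeneratePropertyForString("Merged comment"));

// Only the properties present on 'merge' are written; anything else
// already stored against Flower/Rose is left as-is
table.Execute(TableOperation.InsertOrMerge(merge));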

Retrieving entities from table storage

Retrieving an entity by its PartitionKey/RowKey is accomplished using TableOperation.Retrieve.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var entity = (Plant)table.Execute(
   TableOperation.Retrieve<Plant>(
      "Flower", "Rose")).Result;

Console.WriteLine(entity.Comment);

Deleting an entity from table storage

Deleting an entity is a two-stage process: first we need to get the entity, then we pass it to the Execute method with TableOperation.Delete and the entity will be removed from the table storage.

Note: obviously I've not included error handling in this or other code snippets, particularly here where a valid entity may not be found.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var entity = (Plant)table.Execute(
   TableOperation.Retrieve<Plant>(
      "Flower", "Crocus")).Result;
table.Execute(TableOperation.Delete(entity));
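Since Retrieve returns a null Result when no entity matches the keys, a slightly more defensive version of the delete (a sketch, along the lines of the error handling noted above) might look like this.

var result = table.Execute(
   TableOperation.Retrieve<Plant>("Flower", "Crocus"));

// Only issue the delete if the entity was actually found
if (result.Result is Plant entity)
{
   table.Execute(TableOperation.Delete(entity));
}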

Query Projections

In cases where our data has many properties (for example), we might prefer to query against our data and use projection to reduce the properties retrieved. To do this we use a TableQuery. For example, let's say all we're after is the Comment from our entities; then we could write the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
   "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var projectionQuery = new TableQuery<DynamicTableEntity>()
   .Select(new [] { "Comment" });

EntityResolver<string> resolver = 
   (partitionKey, rowKey, timeStamp, properties, etag) => 
      properties.ContainsKey("Comment") ? 
      properties["Comment"].StringValue : 
      null;

foreach (var comment in table.ExecuteQuery(projectionQuery, resolver))
{
   Console.WriteLine(comment);
}

The TableQuery line is where we create the projection, i.e. what properties we want to retrieve. In this case we're only interested in the "Comment" property, but we could add other properties (excluding the standard PartitionKey, RowKey, and Timestamp properties as these will be retrieved anyway).

The next line is the resolver which is passed to ExecuteQuery along with the projectionQuery. This is basically a delegate which acts as the "custom deserialization logic". See Windows Azure Storage Client Library 2.0 Tables Deep Dive; whilst an old article, it's still very relevant. Of course, the example above shows an anonymous delegate; in situations where we're doing a lot of these sorts of projection queries we'd just create a method for this and pass that into ExecuteQuery as the resolver.
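For example, a sketch of the same resolver pulled out into a method (the method name is mine) which we can then pass to ExecuteQuery as a method group:

private static string CommentResolver(
   string partitionKey, string rowKey, DateTimeOffset timestamp,
   IDictionary<string, EntityProperty> properties, string etag)
{
   return properties.ContainsKey("Comment")
      ? properties["Comment"].StringValue
      : null;
}

// usage
foreach (var comment in table.ExecuteQuery(projectionQuery, CommentResolver))
{
   Console.WriteLine(comment);
}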

Querying using LINQ

Whilst LINQ is supported for querying table storage data, at the time of writing, it’s a little limited or requires you to write your queries in a specific way.

Let’s first look at a valid LINQ query against our plant table

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
   "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment == "Thorns along the stem"
   select entity;

foreach (var r in query)
{
   Console.WriteLine(r.RowKey);
}

In this example we'll query the table storage for any plants with the Comment "Thorns along the stem". But if we were to try to query for a Comment which contains the word "Thorns", like this

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment.Contains("Thorns along the stem")
   select entity;

Sadly we’ll get a (501) Not Implemented back from the table storage service. So there’s obviously a limit to how we query our table storage data, which is fair enough. Obviously if we want more complex query capabilities we’d probably be best served using a different data store.

We can also use projections on our query, i.e.

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment == "Thorns along the stem"
   select entity.RowKey;

or using anonymous types, such as

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment == "Thorns along the stem"
   select new
   {
      entity.RowKey,
      entity.Comment
   };

Using Azure Queues

Azure Queues sit inside your Azure storage account and allow messages to flow through a queue system (similar in some ways to, though not as fully featured as, MSMQ, TIBCO etc.).

Creating a Queue

Obviously we can create a queue using the Azure Portal or Azure Storage Explorer, but let's create a queue via code, using the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");
queue.CreateIfNotExists();

Sending a message to our Queue

Adding a message to the queue is as simple as the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");

var message = new CloudQueueMessage("Hello World");
queue.AddMessage(message);

Peeking at a Queue

We can peek at a message on the queue, which basically means we can look at the message without affecting it, using

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");

var message = queue.PeekMessage();
Console.WriteLine(message.AsString);

As the name suggests, in this instance we're peeking at a single message, the one at the front of the queue (i.e. the next message that would be de-queued). But we can also use the PeekMessages method to enumerate over a number of messages on the queue.

Here’s an example of peeking at 32 messages (it appears 32 is the maximum number of messages we’re allowed to peek, currently anything above this causes a bad request exception)

foreach (var message in queue.PeekMessages(32))
{
   Console.WriteLine(message.AsString);
}

Getting messages

Unlike, for example, TIBCO RV, Azure Queues do not have subscribers and therefore do not push messages to subscribers (like an event might). Once a message is de-queued it will be marked as invisible (see the CloudQueue.GetMessage method).

To de-queue a message we use GetMessage on a queue. As one might expect, once the message is marked as invisible, subsequent calls to GetMessage will not return the hidden message until the visibility timeout is reached, at which point the message reappears and is available again to subsequent GetMessage calls.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");

var message = queue.GetMessage();

Console.WriteLine(message.AsString);

Now if you change the var message = queue.GetMessage(); line to the following

var message = queue.GetMessage(TimeSpan.FromSeconds(10));

and then, within the Azure Portal or Azure Storage Explorer, refresh immediately after it's de-queued: the message will disappear, but refresh again after 10 seconds and the message will reappear in the queue with its dequeue count incremented.

Like PeekMessages, we can call GetMessages to get a batch of messages (between 1 and 32 messages).
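A sketch of that in use, de-queuing up to 32 messages in one call and hiding each of them for 30 seconds whilst we process them (the timeout value here is just an assumption for illustration):

foreach (var message in queue.GetMessages(32, TimeSpan.FromSeconds(30)))
{
   Console.WriteLine(message.AsString);
}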

Deleting a message

To remove a message altogether use

queue.DeleteMessage(message);

This would usually be called after GetMessage, but obviously this is dependent upon your requirements. It might be called after a certain dequeue count or simply after every GetMessage call, but remember that if you do not delete the message it will reappear on your queue until its maximum time-to-live (as supplied via the AddMessage method) expires.
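Putting that together, the usual get/process/delete pattern looks something like the following sketch (the retry threshold of 5 is just an assumption on my part, not anything mandated by the queue service):

var message = queue.GetMessage(TimeSpan.FromMinutes(1));
if (message != null)
{
   try
   {
      // "process" the message
      Console.WriteLine(message.AsString);
      queue.DeleteMessage(message);
   }
   catch
   {
      // after several failed attempts treat it as a poison message
      // and remove it rather than letting it reappear forever
      if (message.DequeueCount >= 5)
      {
         queue.DeleteMessage(message);
      }
   }
}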

Using Azure File Storage

File storage is pretty much what it says on the tin. It's a shared access file system using the SMB protocol. You can create directories, subdirectories and store files in those directories – yes, it's a file system.

Reading a file

Using the Azure Portal either locate or create a storage account, within this create a File Storage, then create a share. Within the share upload a file, mine’s the Hello World.txt file with those immortal words Hello World within it.

Let’s read this file from our client. In many ways the client API is very similar to that used for Blob storage (as one might expect).

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var fileClient = storageAccount.CreateCloudFileClient();

var share = fileClient.GetShareReference("myfiles");
var root = share.GetRootDirectoryReference();
var file = root.GetFileReference("Hello World.txt");

var contents = file.DownloadText();

Uploading a file

We can upload a file using the UploadFromFile method. In this example we'll just upload to the root folder of the myfiles share

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var fileClient = storageAccount.CreateCloudFileClient();

var share = fileClient.GetShareReference("myfiles");
var root = share.GetRootDirectoryReference();
var file = root.GetFileReference("Hello World.txt");
file.UploadFromFile("Hello World.txt");
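As an aside, uploading into a subdirectory rather than the root only needs a couple of extra calls. A sketch (the directory name is my own, and it reuses the share/root references from the snippet above):

var directory = root.GetDirectoryReference("greetings");
directory.CreateIfNotExists();

var greetingFile = directory.GetFileReference("Hello World.txt");
greetingFile.UploadFromFile("Hello World.txt");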

Deleting a file

Deleting a file is as simple as the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var fileClient = storageAccount.CreateCloudFileClient();

var share = fileClient.GetShareReference("myfiles");
var root = share.GetRootDirectoryReference();
var file = root.GetFileReference("Hello World.txt");
file.Delete();

References

Introduction to Azure File storage
File Service REST API

Serilog revisited (now version 2.5)

About a year ago, I wrote the post Structured logging with Serilog which covered the basics of using Serilog 2.0. I’ve just revisited this post and found Serilog 2.5 has changed things a little.

I’m not going to go over what Serilog does etc. but instead just list the same code from my original post, but working with the latest NuGet packages.

So install the following packages into your application using NuGet

  • Serilog
  • Serilog.Sinks.RollingFile
  • Serilog.Sinks.Console

Serilog.Sinks.Console is now used instead of Serilog.Sinks.Literate, which has been deprecated.

Here’s the code. The only real change is around the JsonFormatter

Log.Logger = new LoggerConfiguration()
   .WriteTo.Console()
   .WriteTo.RollingFile(
      new JsonFormatter(renderMessage: true), 
      "logs\\sample-{Date}.txt")
   .MinimumLevel.Verbose()
   .CreateLogger();

Log.Logger.Information("Application Started");

for (var i = 0; i < 10; i++)
{
   Log.Logger.Information("Iteration {I}", i);
}

Log.Logger.Information("Exiting Application");

Now, let’s add something new to this post, add the following NuGet package

  • Serilog.Settings.AppSettings

In my original post I pointed out that Serilog seemed to be aimed towards configuration through code, but the Serilog.Settings.AppSettings package allows us to use the App.config for our configuration.

Change your Log.Logger code to the following

Log.Logger = new LoggerConfiguration()
   .ReadFrom.AppSettings()
   .CreateLogger();

and now in your App.config, within the configuration section, put the following

<appSettings>
   <add key="serilog:minimum-level" 
        value="Verbose" />
   <add key="serilog:using:RollingFile" 
        value="Serilog.Sinks.RollingFile" />
   <add key="serilog:write-to:RollingFile.pathFormat"
        value="logs\\sample-{Date}.txt" />
   <add key="serilog:write-to:RollingFile.formatter" 
        value="Serilog.Formatting.Json.JsonFormatter" />
   <add key="serilog:using:Console" 
        value="Serilog.Sinks.Console" />
   <add key="serilog:write-to:Console" />
</appSettings>

This recreates our original code by outputting to both a rolling file and the console, but sadly it does not allow us to set the renderMessage parameter of the JsonFormatter.

Note: At the time of writing it appears there’s no way to set this renderMessage, see How to set formatProvider property in Serilog from app.config file.

Using Azure Blob Storage

Blob storage has the concept of containers (which can be thought of as directories) and those containers can contain BLOBs. Containers cannot contain other containers and hence the structure differs from a file system.

Containers can be private or allow anonymous read access to BLOBs only or allow anonymous read access for containers and BLOBs.
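If you'd rather set a container's access level from code than via the portal, a sketch along these lines should do it (it assumes the "test" container used later in this post, and the same connection string setup as the other snippets):

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("test");

// Off = private, Blob = anonymous read of individual blobs,
// Container = anonymous read of the container and its blobs
container.SetPermissions(new BlobContainerPermissions
{
   PublicAccess = BlobContainerPublicAccessType.Blob
});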

In the Azure Portal, if you haven’t already got one set up, create a storage account then create a Blob service. Next create a container, mine’s named test. Next, upload a file, I’ve uploaded Hello World.txt which as you imagine simply has the line Hello World within it.

When you create a Blob service you're assigned an endpoint name, along the lines of https://<storage account>.blob.core.windows.net/

When we add containers, these get the URL https://<storage account>.blob.core.windows.net/<container name> and files get the URL https://<storage account>.blob.core.windows.net/<container name>/<file name>

A comprehensive document on using .NET to interact with the Blob storage can be found at Get started with Azure Blob storage using .NET.

Reading a Blob

Here’s some code to read our uploaded Hello World.txt file

Firstly we need to use NuGet to add the WindowsAzure.Storage and Microsoft.WindowsAzure.ConfigurationManager packages.

using Microsoft.Azure;
using Microsoft.WindowsAzure.Storage;

private static void ReadBlob()
{
   var storageAccount = CloudStorageAccount.Parse(
      CloudConfigurationManager.GetSetting(
         "StorageConnectionString"));

   var blobClient = storageAccount.CreateCloudBlobClient();
   var container = blobClient.GetContainerReference("test");
   var blob = container.GetBlockBlobReference("Hello World.txt");
   var contents = blob.DownloadText();

   Console.WriteLine(contents);
}

In the example our file is a text file, but we can also access the blob as a stream using DownloadToStream (plus there’s a whole bunch of other methods for accessing the blobs).
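For instance, a sketch of reading the same blob via a stream (reusing the blob reference from ReadBlob above, and assuming the content is UTF-8 text):

using (var stream = new MemoryStream())
{
   // download the blob's bytes into the stream rather than as text
   blob.DownloadToStream(stream);
   Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));
}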

Writing a Blob

We can write blobs pretty easily also

public static void WriteBlob()
{
   var storageAccount = CloudStorageAccount.Parse(
      CloudConfigurationManager.GetSetting(
         "StorageConnectionString"));

   var blobClient = storageAccount.CreateCloudBlobClient();

   var container = blobClient.GetContainerReference("test");

   var blob = container.GetBlockBlobReference("new.txt");
   blob.UploadFromFile("new.txt");
}

In this example, as you'll see, we still do the standard steps: connect to the cloud via the blob client and get a reference to the container we want to interact with, but next we get a blob reference for a file (in this case new.txt didn't previously exist) and then upload, or write from a stream, to blob storage. If "new.txt" does already exist in blob storage it'll simply be overwritten.

Deleting a Blob

We've looked at creation/update of blobs and retrieval of them, so let's complete the CRUD operations on blobs with delete

public static void DeleteBlob()
{
   var storageAccount = CloudStorageAccount.Parse(
      CloudConfigurationManager.GetSetting(
         "StorageConnectionString"));

   var blobClient = storageAccount.CreateCloudBlobClient();

   var container = blobClient.GetContainerReference("test");

   var blob = container.GetBlockBlobReference("new.txt");
   blob.DeleteIfExists();
}

References

Along with the standard CRUD type operations we can carry out actions on containers, list blobs etc.

See Get started with Azure Blob storage using .NET for more information.

The Blob Service REST API lists the REST API’s if you’d prefer to bypass the Microsoft libraries for accessing the Blobs (or need to implement in a different language).

Azure Storage

Blobs, Files, Tables and Queues – with Azure Cosmos DB in preview (at the time of writing), these are the currently supported Storage account options in Azure. Of course, alongside the storage account options are SQL databases, which I'll cover in another post.

Usually when you create an application within Azure's portal, you'll also have an App Service plan created (with your deployment location set up) and a Storage Account will be set up automatically for you.

Note: You can view your storage account details via the Azure portal or using the Windows application, Storage Explorer, which also allows access to local storage accounts for development/testing.

Blobs

The Blob container is used internally by Azure applications to store information regarding webjobs (and probably more, although I've not yet experienced all the options), but for us developers and our applications (or just as plain old storage) we can use it to store any type of file/binary data. The difference between Blob storage and File storage is more down to the way things are stored.

Within Blob storage we create containers (such as those Azure creates for our applications/connections) and we can make a container private, blob anonymous read (for blob read access, i.e. publicly accessing individual blobs) or container anonymous read (for container and blob reads, which allows us to publicly view containers and the blobs within).

Files

Unsurprisingly, given that Blob storage acts a little like a file system, Azure also includes a File storage mechanism which one can think of as a file share (using the SMB 3.0 protocol at the time of writing). Again this is used internally by Azure applications for log file storage. There's little more to say except that file storage allows us to create directories, subdirectories and files as one would expect. We can also set quotas on our shares to ensure the total size of files on a share doesn't exceed a threshold (in GB; currently there's a limit of 5120 GB).
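From code, setting a quota looks roughly like the following sketch (reusing the CloudFileClient from the file storage snippets earlier; treat the exact property names as an assumption on my part):

var share = fileClient.GetShareReference("myfiles");
share.Properties.Quota = 10; // quota is expressed in GB
share.SetProperties();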

Tables

Table storage allows us to work with entity storage. This is more of a NoSQL (or key/value storage) offering than the SQL database options within Azure. We're definitely not talking relational databases here, but instead we have a fast mechanism for storing application data. Again, Azure already uses table storage within our storage account for internal data.

Check out the post Working with 154 million records on Azure Table Storage – the story of "Have I been pwned?" for a look at Table storage performance; Troy Hunt is storing far more than I currently have within Table Storage.

Table storage stores entities, whereby each entity stores key/value pairs representing the property name and the property value (as one would expect). Along with the entity's properties we also need to define three system properties: PartitionKey (a string which identifies the partition an entity belongs to), RowKey (a string unique identifier for the entity within the partition) and Timestamp (a DateTime which indicates when the entity was last modified).

Note: According to Get started with Azure Table storage using .NET, an entity can be up to 1 MB in size and can have a maximum of 252 properties.

In C# terms we can simply derive our entity object from the Microsoft.WindowsAzure.Storage.Table.TableEntity which will supply the required properties. For example

using Microsoft.WindowsAzure.Storage.Table;

public class PersonEntity : TableEntity
{
   public PersonEntity()
   {
      // We need to expose a default ctor
   }

   public PersonEntity(string firstName, string lastName)
   {
      PartitionKey = lastName;
      RowKey = firstName;
   }

   public int Age { get; set; }
}

In the above we've declared the last name as the PartitionKey and the first name as the RowKey; obviously a better strategy will be required in production systems, see Designing a Scalable Partitioning Strategy for Azure Table Storage and Azure Storage Table Design Guide: Designing Scalable and Performant Tables for more information on partition key strategies.

For more information on developing C#/.NET code for Table Storage, check out Get started with Azure Table storage using .NET.

Queues

Not as fully featured as MSMQ, Queues nevertheless offer a message queue service. This feature allows us to communicate between two separate applications and/or Azure functions to put together a composite application.

Let's go to Azure Functions and create a C# QueueTrigger function; the default code simply logs the queue's message and looks like this

using System;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {myQueueItem}");
}

You’ll need to set the Queue name to the name of the queue you’ll be sending messages on, so let’s call ours test-queue.

Now Run the function.

From Storage explorer or from another instance of the Azure portal (as we want to watch the Azure function running) create a Queue named test-queue and that’s basically it.

Now Add a message to the queue and watch your Azure function log. It should display your message. After a message has been received it’s automatically removed from the queue.

Cosmos DB

At the time of writing Azure Cosmos DB is in preview, so anything I mention here may well have changed by the time it's out of preview.

I won't go through every capability of Cosmos DB (you can check out the Welcome to Azure Cosmos DB link for that), but some of the key capabilities that interest me the most include data distributed across regions, "standard" APIs including MongoDB and the Table API, as well as tunable consistency.

Recording HTTP interactions with Scotch

Scotch is an HTTP recorder and playback library. What this means is that, whether we're writing unit tests which require an HTTP connection or content, or we need to test an application offline, we can first record a session by running code against our HttpClient and recording the results of any calls, then take our application offline and replay those interactions.

Let’s get started…

Add the NuGet package scotch. If you're using C# this will also deploy some F# assemblies; don't panic, as it's .NET it's usable from C#, and luckily the API follows a C# design style and hence looks like any other C#/.NET framework library.

This example shows how we might have code which connects to the scotch GitHub page and, using ScotchMode.Recording, records any interactions made through the httpClient, saving them to the data.json file. This example uses HttpClients.NewHttpClientWithHandler as we need to go through a proxy server; otherwise we'd just use HttpClients.NewHttpClient

// proxyAddress and proxyPort are placeholders for your own proxy settings
var proxy = new WebProxy(proxyAddress, proxyPort)
{
   Credentials = CredentialCache.DefaultCredentials
};
var httpHandler = new HttpClientHandler
{
   Proxy = proxy
};

var httpClient = HttpClients.NewHttpClientWithHandler(
   httpHandler, 
   @"c:\Development\interactions\data.json", 
   ScotchMode.Recording);

var result = await httpClient.GetAsync(
   new Uri("https://github.com/mleech/scotch"));

Now, assuming we've saved a recording, we can switch the code to ScotchMode.Replaying (this example demonstrates non-proxy code and hence uses HttpClients.NewHttpClient)

var httpClient = HttpClients.NewHttpClient(
   @"c:\Development\interactions\data.json", 
   ScotchMode.Replaying);

var result = await httpClient.GetAsync(
   new Uri("https://github.com/mleech/scotch"));
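This drops neatly into a unit test so it can run offline against the recording. A sketch assuming xUnit (the test name and assertion are my own):

[Fact]
public async Task GetScotchPage_ReplaysRecordedResponse()
{
   var httpClient = HttpClients.NewHttpClient(
      @"c:\Development\interactions\data.json",
      ScotchMode.Replaying);

   var result = await httpClient.GetAsync(
      new Uri("https://github.com/mleech/scotch"));

   // the response comes from data.json, not from github.com
   Assert.True(result.IsSuccessStatusCode);
}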

Adventures in UWP – Prism (more Unity than Prism)

Prism is a well-used framework in WPF and it's also available in UWP, so let's try it out. We're going to start by looking at switching the default generated (blank) UWP application into one which uses Prism, or more specifically Unity and Prism's navigation functionality.

  • Create a Blank UWP application
  • From Nuget install the package Prism.Unity (I’m using v6.3.0)
  • Edit App.xaml.cs and change the base class from Application to PrismUnityApplication (yes we’ll use Unity for this test), don’t forget to add using Prism.Unity.Windows;
  • Remove all generated code from App.xaml.cs except for the constructor
  • Open App.xaml and change it from using Application to PrismUnityApplication, for example
    <windows:PrismUnityApplication
        x:Class="MathsTest.App"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="using:MathsTest"
        xmlns:windows="using:Prism.Unity.Windows"
        RequestedTheme="Light">
    
    </windows:PrismUnityApplication>
    
  • Add the following bare bones method (yes it’s not handling possible startup states etc. as I wanted to reduce this code to the minimum)
    protected override Task OnLaunchApplicationAsync(LaunchActivatedEventArgs args)
    {
       NavigationService.Navigate("Main", null);
       return Task.FromResult(true);
    }
    
  • By default the UWP project will have created a MainPage.xaml but Prism’s navigation code expects this to be in a Views folder, so create a Views folder in your solution and move MainPage.xaml into this Views folder.

    If you see an exception along the lines of "The page name does not have an associated type in namespace, Parameter name: pageToken" then you've probably not got a view with the correct name within the Views folder. Don't forget to update the XAML and the xaml.cs files to include Views in the namespace, i.e. TestApp.Views.MainPage

At this point we have UWP running from Prism.
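A natural next step (not covered in the steps above, so treat the details as my assumptions) is to let Prism auto-wire a view model by convention: for Views.MainPage it looks for ViewModels.MainPageViewModel once the ViewModelLocator.AutoWireViewModel attached property is enabled on the page. A minimal sketch of such a view model:

using Prism.Windows.Mvvm;

namespace MathsTest.ViewModels
{
   public class MainPageViewModel : ViewModelBase
   {
      private string _title = "Hello from Prism";

      // SetProperty comes from Prism's BindableBase and raises
      // PropertyChanged for us
      public string Title
      {
         get { return _title; }
         set { SetProperty(ref _title, value); }
      }
   }
}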

Internal error 500, code 0x80070021 in IIS

Whilst setting up a new build server I came across the following error from IIS: internal error 500, code 0x80070021 (I was setting up CruiseControl.net, but I assume this isn't specific to that application).

It seems this is down to the handlers being locked in IIS; running the following fixed the issue

%windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/handlers