Category Archives: Azure

Azure function triggers

In a previous introductory post to Azure functions I mentioned that there were more than just the HttpTrigger.

The current list of triggers available (at least for C#/.NET developers) is

  • HttpTrigger – triggers a function based upon an HTTP request, like a REST service call
  • HttpTriggerWithParameters – like the HttpTrigger, but with route parameters bound to the function’s method arguments
  • TimerTrigger – triggers a function based upon a CRON like time interval/setup
  • QueueTrigger – triggers when a message is added to the Azure Queue Storage
  • BlobTrigger – triggers when a blob is added to the specified container
  • EventHubTrigger – triggers when a new event is received via the event hub
  • ServiceBusQueueTrigger – triggers when a message is added to the service bus queue
  • ServiceBusTopicTrigger – triggers when a message is added to the service bus topic
  • ManualTrigger – triggers when the Run button is pressed in the Azure Portal
  • Generic Webhook – triggers when a webhook request occurs
  • GitHub Webhook – triggers when a GitHub webhook request occurs

Many of these are pretty self-explanatory, but let’s have a quick look at some of them anyway.

Note: I’m not going to cover the triggers that require other services, such as queues, event hubs, webhooks etc. in this post, as they require more than a short paragraph to set up. I’ll look to dedicate a post to each as and when I begin using them.

HttpTrigger

The HTTP trigger is analogous to a REST-like service call, i.e. when an HTTP request is received the Azure function is called, passing in the HttpRequestMessage object (and a TraceWriter for logging).

Here’s a bare bones function example.

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
   log.Info("C# HTTP trigger function processed a request.");
   return req.CreateResponse(HttpStatusCode.OK, "Hello World");
}

The HttpTrigger supports a lot of HTTP methods, including the obvious GET, POST, DELETE along with HEAD, PATCH, PUT, OPTIONS, TRACE.

Note: Everything we can do with HttpTriggerWithParameters, i.e. changing the HTTP method type, adding a route to the function.json and adding parameters to our method to decode the route parameters, can be implemented with the HttpTrigger. At the time of writing this post, I’m not wholly sure of the benefit of the HttpTriggerWithParameters over HttpTrigger.

HttpTriggerWithParameters

An HttpTriggerWithParameters trigger is similar to an HttpTrigger, except that when we create one via the Azure Portal, by default we get the following code

public static HttpResponseMessage Run(HttpRequestMessage req, string name, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");
    return req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}

Notice that the return is not wrapped in a Task and that the method includes a string name parameter. When created via the Azure Portal the HTTP method is set to GET; GET is not the only method this function can support, but it appears that way by default. To add HTTP methods we need to go to the function’s Integrate menu option and add the supported methods, and parameters can be passed in via the Azure Portal’s Query parameters. More interestingly though, a route is automatically created for us in the associated function.json file. The route looks like this

"route": "HttpTriggerCSharp/name/{name}"

This obviously means we can navigate to the function using https://(app-name).azurewebsites.net/api/HttpTriggerCSharp/name/{name}
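
For context, the full function.json for this trigger looks something along the following lines. This is just a sketch of the file’s shape – the exact binding names and values depend on how the function was created – but it shows where the route and allowed HTTP methods live.

{
  "bindings": [
    {
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "authLevel": "function",
      "methods": [ "get" ],
      "route": "HttpTriggerCSharp/name/{name}"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ],
  "disabled": false
}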

Let’s add an age parameter to the query using the Add parameter option in the portal. If we change our route (in the function.json file) to look like this

"route": "HttpTriggerCSharp/name/{name}/{age}"

and the source to look like this

public static HttpResponseMessage Run(HttpRequestMessage req, string name, int? age, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Fetching the name from the path parameter in the request URL
    return req.CreateResponse(HttpStatusCode.OK, "Hello " + name + " " + age);
}

we can now navigate to the function using https://(app-name).azurewebsites.net/api/HttpTriggerCSharp/name/{name}/{age}

TimerTrigger

As the name suggests, this trigger is dependent upon a supplied schedule (in CRON format). Obviously this is very useful in situations where we want to run reports every day at a set time, or to periodically check some source which is not supported as a trigger, such as a file location.

The code generated by the Azure Portal looks like this

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}
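
The schedule itself lives in the function’s function.json as a six-field CRON expression (seconds first). As a rough sketch, a binding along the following lines would run the function every five minutes – the binding name is simply whatever the portal generated.

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ],
  "disabled": false
}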

ManualTrigger

The manual trigger is a trigger which is executed via the Azure Portal’s Run button. One might view it a little like running a batch file, or simply as a function which you do not want others to have access to. The code generated by the portal looks like this

public static void Run(string input, TraceWriter log)
{
    log.Info($"C# manually triggered function called with input: {input}");
}

and values are supplied as input (to the string input parameter) via the Request Body input box as plain text.

Azure page “Hmmm… Looks like something went wrong” in Edge

I just tried to access my Azure account via Microsoft Edge and logged in fine but upon clicking on Portal was met with a fairly useless message

Hmmm... Looks like something went wrong

Clicking try again did nothing.

On searching the internet I found that if you press F12 in Edge to display the developer tools, the Cookies node contained the entry UE_Html5StorageExceeded.

Running localStorage.clear() from the developer tools console window is suggested here. Sadly this failed with an undefined response.

However, Edge Crashing with HTML5 storage exceeded suggests going to Edge | Settings | Clear browsing data, clicking the Choose what to clear button and selecting Cookies and saved website data, then clicking the Clear button. This did work.

Oh and yes, sadly you will need to login again to various websites that you probably left logged in permanently.

Using Azure storage emulator

For all of my posts on using Azure Storage I’ve been using my online account, but of course it’s more likely we’d want to test our code against an offline solution first, hence it’s time to use the storage emulator.

In your App.config you need to have the following

<appSettings>
   <add key="StorageConnectionString" value="UseDevelopmentStorage=true;" />
</appSettings>
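
For comparison, when you later want to point the same code at a real storage account, the connection string takes the following form – the account name and key here are placeholders, so substitute your own storage account’s values.

<appSettings>
   <add key="StorageConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=my-account-key;EndpointSuffix=core.windows.net" />
</appSettings>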

Now the following code (taken from my Azure table storage post) will use the local storage/Azure storage emulator.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
   "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

table.CreateIfNotExists();

var p = new Plant("Fruit", "Apple")
{
   Comment = "Can be used to make cider",
   MonthsToPlant = "Jan-Mar"
};

table.Execute(TableOperation.InsertOrMerge(p));

Note: If not already installed, install Azure Storage Emulator from https://azure.microsoft.com/en-gb/downloads/

and Azure storage emulator will be installed in

C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator

Microsoft Azure Storage Explorer can be used to view the storage data on your local machine via the emulator, and when you open the Development account under Local and Attached Storage Accounts this will run the emulator. However, I had several problems when running my client code.

This firstly resulted in no data in the table storage and then a 400 bad request.

So I had to open a command prompt at C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator (or run “Microsoft Azure Storage Emulator” from the Windows run/search box, which gives you a command prompt with the emulator already running and a CLI to interact with it). From here you can check the status, start, stop and create local storage using AzureStorageEmulator.exe.
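
For reference, the commands (as used with the 5.x emulator) look something like this

AzureStorageEmulator.exe status
AzureStorageEmulator.exe start
AzureStorageEmulator.exe stop
AzureStorageEmulator.exe init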

All looked fine but it still didn’t work. The Azure storage emulator ultimately uses a default local MSSQL (LocalDB) database, so it was time to check things on the DB side. Using Microsoft SQL Server Management Studio I connected to (local)\MSSQLLocalDb and, under Databases, deleted AzureStorageEmulatorDb52 (of course the Azure Storage Emulator needs to have been stopped first). As I was using Azure Storage Emulator 5.2, the DB name may differ depending on the version of the emulator you’re running.

From the Azure Storage Emulator command prompt, I ran

AzureStorageEmulator init

to recreate the DB (you might need to use the /forceCreate switch to reinitialize the DB). Oddly I still got an error from this command (see below)

Cannot create database 'AzureStorageEmulatorDb52' : The login already has an account under a different user name.
Changed database context to 'AzureStorageEmulatorDb52'..
One or more initialization actions have failed. Resolve these errors before attempting to run the storage emulator again.
Error: Cannot create database 'AzureStorageEmulatorDb52' : The login already has an account under a different user name.
Changed database context to 'AzureStorageEmulatorDb52'..

However, in Microsoft SQL Server Management Studio everything looked to be in place, so I re-ran my code and this time it worked.

Now we can run against the emulator and work away from the internet, and/or reduce traffic and therefore the possible costs of accessing Azure in the cloud.

References

Use the Azure storage emulator for development and testing

Combining Serilog and Table Storage

As I’ve written posts on Serilog and Azure Storage recently, I thought I’d try to combine them both and use Azure table storage for the logging sink. Thankfully, I didn’t need to code anything myself, as this type of sink has already been written, see the AzureTableStorage sink.

If we take the basic code from Serilog revisited (now version 2.5) and Azure Table Storage, and include the glue of AzureTableStorage, we get the following

  • Add the NuGet package Serilog.Sinks.AzureTableStorage
var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
   "StorageConnectionString"));

Log.Logger = new LoggerConfiguration()
   .WriteTo.AzureTableStorage(storageAccount, storageTableName: "MyApp")
   .CreateLogger();

Log.Logger.Information("Application Started");

for (var i = 0; i < 10; i++)
{
   Log.Logger.Information("Iteration {I}", i);
}

Log.Logger.Information("Exiting Application");

Without the storageTableName parameter we’ll get LogEventEntity as the default table name. That’s all there is to it – now we’re sending our log entries into the cloud.

Azure Table Storage

Table storage is a schema-less data store, so it has some similarity to NoSQL databases.

For this post I’m just going to cover the code snippets for the basic CRUD type operations.

Creating a table

We can create a table within table storage using the following code.

Once created, the table is empty of data and of course, being schema-less, it has no form, i.e. it’s really just an empty container at this point.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

table.CreateIfNotExists();

Deleting a table

Deleting a table is as simple as this

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("mytable");

table.Delete();

Entities

Using the Azure table API we need to implement the ITableEntity interface or derive our entities from the TableEntity class. For example

class Plant : TableEntity
{
   public Plant()
   {
   }

   public Plant(string type, string species)
   {
      PartitionKey = type;
      RowKey = species;
   }

   public string Comment { get; set; }
}

In this simple example we map the type of plant to the PartitionKey and the species to the RowKey; obviously you might prefer using GUIDs or other ways of keying into your data. The thing to remember is that the PartitionKey/RowKey combination must be unique within the table. The example above is not going to make code very readable, so it’s more likely that we’d also declare properties with more apt names, such as Type and Species, but it was meant to be a quick and simple piece of code.

Writing an entity to table storage

Writing of entities (and many other entity operations) is handled by the Execute method on the table. Which operation we use is determined by the TableOperation passed as a parameter to the Execute method.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

table.CreateIfNotExists();

var p = new Plant("Flower", "Rose")
{
   Comment = "Watch out for thorns"
};

table.Execute(TableOperation.Insert(p));

This will throw an exception if we already have an entity with the same PartitionKey/RowKey combination in the table storage (a sketch of catching this case is shown below), so we might prefer to tell the table storage to insert or update…
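
A minimal sketch of trapping that conflict, reusing the table and plant from the snippet above and assuming the StorageException type from the WindowsAzure.Storage library, might look like this

try
{
   table.Execute(TableOperation.Insert(p));
}
catch (StorageException ex) when (ex.RequestInformation.HttpStatusCode == 409)
{
   // 409 Conflict – an entity with this PartitionKey/RowKey already exists
   Console.WriteLine("Entity already exists: " + ex.Message);
}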

Updating entities within table storage

If we prefer to handle both insertion OR updating within a single call we can use the TableOperation.InsertOrReplace method

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var p = new Plant("Flower", "Rose")
{
   Comment = "Thorns along the stem"
};

table.Execute(TableOperation.InsertOrReplace(p));

There’s also TableOperation.InsertOrMerge which, in essence, merges new properties (if new ones exist) onto an existing entity if the entity already exists.

Retrieving entities from table storage

Retrieving an entity by its PartitionKey/RowKey is accomplished using the TableOperation.Retrieve operation.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var entity = (Plant)table.Execute(
   TableOperation.Retrieve<Plant>(
      "Flower", "Rose")).Result;

Console.WriteLine(entity.Comment);

Deleting an entity from table storage

Deleting an entity is a two-stage process: first we need to get the entity, then we pass it to the Execute method with TableOperation.Delete and the entity will be removed from table storage.

Note: obviously I’ve not included error handling in this or other code snippets. Particularly here where a valid entity may not be found.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var entity = (Plant)table.Execute(
   TableOperation.Retrieve<Plant>(
      "Flower", "Crocus")).Result;
table.Execute(TableOperation.Delete(entity));

Query Projections

In cases where our data has many properties (for example), we might prefer to query against our data and use projection capabilities to reduce the properties retrieved. To do this we use the TableQuery. For example, let’s say all we’re after is the Comment from our entities, then we could write the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
   "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var projectionQuery = new TableQuery<DynamicTableEntity>()
   .Select(new [] { "Comment" });

EntityResolver<string> resolver = 
   (partitionKey, rowKey, timeStamp, properties, etag) =>
      properties.ContainsKey("Comment") ? 
      properties["Comment"].StringValue : 
      null;

foreach (var comment in table.ExecuteQuery(projectionQuery, resolver))
{
   Console.WriteLine(comment);
}

The TableQuery line is where we create the projection, i.e. what properties we want to retrieve. In this case we’re only interested in the “Comment” property, but we could add other properties (excluding the standard PartitionKey, RowKey and Timestamp properties, as these will be retrieved anyway).

The next line is the resolver which is passed to ExecuteQuery along with the projectionQuery. This is basically a delegate which acts as the “custom deserialization logic”. See Windows Azure Storage Client Library 2.0 Tables Deep Dive – whilst an old article, it’s still very relevant. Of course, the example above shows an anonymous delegate; in situations where we’re doing a lot of these sorts of projection queries we’d just create a method for this and pass that into ExecuteQuery as the resolver, as sketched below.
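
As a rough sketch (the method name here is just illustrative), the equivalent named resolver might look like this

private static string CommentResolver(
   string partitionKey, string rowKey, DateTimeOffset timestamp,
   IDictionary<string, EntityProperty> properties, string etag)
{
   return properties.ContainsKey("Comment")
      ? properties["Comment"].StringValue
      : null;
}

// usage – the same projection query as before
foreach (var comment in table.ExecuteQuery(projectionQuery, CommentResolver))
{
   Console.WriteLine(comment);
}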

Querying using LINQ

Whilst LINQ is supported for querying table storage data, at the time of writing, it’s a little limited or requires you to write your queries in a specific way.

Let’s first look at a valid LINQ query against our plant table

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
   "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment == "Thorns along the stem"
   select entity;

foreach (var r in query)
{
   Console.WriteLine(r.RowKey);
}

In this example we’ll query the table storage for any plants with a Comment of “Thorns along the stem”, but now if we were to try to query for a Comment which contains the word “Thorns”, like this

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment.Contains("Thorns along the stem")
   select entity;

Sadly we’ll get a (501) Not Implemented back from the table storage service. So there’s obviously a limit to how we query our table storage data, which is fair enough. Obviously if we want more complex query capabilities we’d probably be best served using a different data store.

We can also use projections on our query, i.e.

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment == "Thorns along the stem"
   select entity.RowKey;

or using anonymous types, such as

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment == "Thorns along the stem"
   select new
   {
      entity.RowKey,
      entity.Comment
   };
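
As an aside, if the LINQ support proves too limited, we can also build queries with the fluent filter helpers from the storage library. A rough sketch of querying for the same comment within the Flower partition might look like this

var filter = TableQuery.CombineFilters(
   TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Flower"),
   TableOperators.And,
   TableQuery.GenerateFilterCondition("Comment", QueryComparisons.Equal, "Thorns along the stem"));

var query = new TableQuery<Plant>().Where(filter);

foreach (var r in table.ExecuteQuery(query))
{
   Console.WriteLine(r.RowKey);
}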

Using Azure Queues

Azure Queue storage sits inside your Azure storage account and allows messages to flow through a queue system (similar in some ways to, but not as fully featured as, MSMQ, TIBCO etc.).

Creating a Queue

Obviously we can create a queue using the Azure Portal or Azure Storage Explorer, but let’s create a queue via code, using the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");
queue.CreateIfNotExists();

Sending a message to our Queue

Adding a message to the queue is as simple as the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");

var message = new CloudQueueMessage("Hello World");
queue.AddMessage(message);

Peeking at a Queue

We can peek at a message on the queue, which basically means we can look at the message without affecting it, using

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");

var message = queue.PeekMessage();
Console.WriteLine(message.AsString);

As the name suggests, in this instance we’re peeking at the message at the front of the queue, i.e. the next message that would be de-queued. But we can also use the PeekMessages method to enumerate over a number of messages on the queue.

Here’s an example of peeking at 32 messages (it appears 32 is the maximum number of messages we’re allowed to peek, currently anything above this causes a bad request exception)

foreach (var message in queue.PeekMessages(32))
{
   Console.WriteLine(message.AsString);
}

Getting messages

Unlike, for example, TIBCO RV, Azure Queues do not have subscribers and therefore do not push messages to subscribers (like an event might). Once a message is de-queued it will be marked as invisible (see the CloudQueue.GetMessage method).

To de-queue a message we use GetMessage on a queue. As one might expect, once the message is marked as invisible, subsequent calls to GetMessage will not return the hidden message until the visibility timeout is reached, at which point the message reappears and is available again to subsequent GetMessage calls.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");

var message = queue.GetMessage();

Console.WriteLine(message.AsString);

Now if you change the var message = queue.GetMessage(); line to the following

var message = queue.GetMessage(TimeSpan.FromSeconds(10));

and then refresh within the Azure Portal or Azure Storage Explorer immediately after the message is de-queued, it will disappear; refresh again after 10 seconds and the message will reappear in the queue with its dequeue count incremented.

Like PeekMessages, we can call GetMessages to get a batch of messages (between 1 and 32 messages).
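
A quick sketch of getting a batch (here asking for up to 32 messages and hiding each for 30 seconds), reusing the queue reference from the earlier snippets, might look like this

foreach (var message in queue.GetMessages(32, TimeSpan.FromSeconds(30)))
{
   Console.WriteLine(message.AsString);
}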

Deleting a message

To remove a message altogether use

queue.DeleteMessage(message);

This would usually be called after GetMessage is called, but obviously this is dependent upon your requirements. It might be called after a certain dequeue count or simply after every GetMessage call, but remember that if you do not delete the message it will reappear on your queue until its maximum time-to-live (as supplied via the AddMessage method) ends.

Using Azure File Storage

File storage is pretty much what it says on the tin. It’s a shared-access file system using the SMB protocol. You can create directories and subdirectories and store files in those directories – yes, it’s a file system.

Reading a file

Using the Azure Portal, either locate or create a storage account, within this create a File storage, then create a share. Within the share upload a file; mine’s the Hello World.txt file with those immortal words Hello World within it.

Let’s read this file from our client. In many ways the client API is very similar to that used for Blob storage (as one might expect).

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var fileClient = storageAccount.CreateCloudFileClient();

var share = fileClient.GetShareReference("myfiles");
var root = share.GetRootDirectoryReference();
var file = root.GetFileReference("Hello World.txt");

var contents = file.DownloadText();

Uploading a file

We can upload a file using UploadFromFile. In this example we’ll just upload to the root folder of the myfiles share

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var fileClient = storageAccount.CreateCloudFileClient();

var share = fileClient.GetShareReference("myfiles");
var root = share.GetRootDirectoryReference();
var file = root.GetFileReference("Hello World.txt");
file.UploadFromFile("Hello World.txt");

Deleting a file

Deleting a file is as simple as the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var fileClient = storageAccount.CreateCloudFileClient();

var share = fileClient.GetShareReference("myfiles");
var root = share.GetRootDirectoryReference();
var file = root.GetFileReference("Hello World.txt");
file.Delete();
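
Since the share is a file system, we can also work with subdirectories. A minimal sketch of creating a subdirectory (the name reports is just an example) and uploading a file into it might look like this

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var fileClient = storageAccount.CreateCloudFileClient();

var share = fileClient.GetShareReference("myfiles");
var root = share.GetRootDirectoryReference();

// create the subdirectory if it doesn't already exist
var directory = root.GetDirectoryReference("reports");
directory.CreateIfNotExists();

// upload a file into the subdirectory
var file = directory.GetFileReference("Hello World.txt");
file.UploadFromFile("Hello World.txt");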

References

Introduction to Azure File storage
File Service REST API

Using Azure Blob Storage

Blob storage has the concept of containers (which can be thought of as directories) and those containers can contain blobs. Containers cannot contain containers and hence differ from a file system in structure.

Containers can be private, allow anonymous read access to blobs only, or allow anonymous read access to both containers and blobs.

In the Azure Portal, if you haven’t already got one set up, create a storage account then create a Blob service. Next create a container; mine’s named test. Next, upload a file – I’ve uploaded Hello World.txt which, as you might imagine, simply has the line Hello World within it.

When you create a Blob service you’re assigned an endpoint name, along the lines of https://<storage account>.blob.core.windows.net/

When we add containers, these get the URL https://<storage account>.blob.core.windows.net/<container name> and files get the URL https://<storage account>.blob.core.windows.net/<container name>/<file name>

A comprehensive document on using .NET to interact with the Blob storage can be found at Get started with Azure Blob storage using .NET.

Reading a Blob

Here’s some code to read our uploaded Hello World.txt file

Firstly we need to use NuGet to add the WindowsAzure.Storage and Microsoft.WindowsAzure.ConfigurationManager packages.

using Microsoft.Azure;
using Microsoft.WindowsAzure.Storage;

private static void ReadBlob()
{
   var storageAccount = CloudStorageAccount.Parse(
      CloudConfigurationManager.GetSetting(
         "StorageConnectionString"));

   var blobClient = storageAccount.CreateCloudBlobClient();
   var container = blobClient.GetContainerReference("test");
   var blob = container.GetBlockBlobReference("Hello World.txt");
   var contents = blob.DownloadText();

   Console.WriteLine(contents);
}

In the example our file is a text file, but we can also access the blob as a stream using DownloadToStream (plus there’s a whole bunch of other methods for accessing the blobs).
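
For example, a quick sketch of reading the same blob via a stream (assuming the same container reference as above, plus System.IO and System.Text usings) might look like this

var blob = container.GetBlockBlobReference("Hello World.txt");

using (var stream = new MemoryStream())
{
   // download the blob's contents into the stream
   blob.DownloadToStream(stream);
   Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));
}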

Writing a Blob

We can write blobs pretty easily also

public static void WriteBlob()
{
   var storageAccount = CloudStorageAccount.Parse(
      CloudConfigurationManager.GetSetting(
         "StorageConnectionString"));

   var blobClient = storageAccount.CreateCloudBlobClient();

   var container = blobClient.GetContainerReference("test");

   var blob = container.GetBlockBlobReference("new.txt");
   blob.UploadFromFile("new.txt");
}

In this example, as you’ll see, we still do the standard steps: connect to the cloud via the blob client and get a reference to the container we want to interact with, but next we get a blob reference for a file (in this case new.txt didn’t exist) and then upload from a file (or we could write from a stream) to blob storage. If “new.txt” does exist in blob storage it’ll simply be overwritten.

Deleting a Blob

We’ve looked at creation/update of blobs and retrieval of them, so let’s complete the CRUD operations on blobs with delete

public static void DeleteBlob()
{
   var storageAccount = CloudStorageAccount.Parse(
      CloudConfigurationManager.GetSetting(
         "StorageConnectionString"));

   var blobClient = storageAccount.CreateCloudBlobClient();

   var container = blobClient.GetContainerReference("test");

   var blob = container.GetBlockBlobReference("new.txt");
   blob.DeleteIfExists();
}

References

Along with the standard CRUD type operations we can carry out actions on containers, list blobs etc.
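
For example, a rough sketch of listing the blobs within a container (flattening any virtual directory structure) might look like this

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("test");

// list all blobs in the container, ignoring virtual directories
foreach (var item in container.ListBlobs(useFlatBlobListing: true))
{
   Console.WriteLine(item.Uri);
}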

See Get started with Azure Blob storage using .NET for more information.

The Blob Service REST API lists the REST API’s if you’d prefer to bypass the Microsoft libraries for accessing the Blobs (or need to implement in a different language).

Azure Storage

Blobs, Files, Tables and Queues – with Azure Cosmos DB in preview (at the time of writing), these are the currently supported Storage account options in Azure. Of course, alongside the storage account options are SQL databases, which I’ll cover in a post another time.

Usually when you create an application within Azure’s portal, you’ll also have an App Service plan created (with your deployment location setup) and a Storage Account will be setup automatically for you.

Note: You can view your storage account details via the Azure portal or using the Windows application, Storage Explorer, which also allows access to local storage accounts for development/testing.

Blobs

Blob containers are used internally by Azure applications to store information regarding webjobs (and probably more, although I’ve not yet experienced all the options), but we developers can use them in our applications (or just as plain old storage) to store any type of file/binary data. The difference between Blob storage and File storage is more down to the way things are stored.

Within Blob storage we create containers (such as Azure creates for our applications/connections) and we can make a container private, blob anonymous read (for blob read access, i.e. publicly accessing individual blobs) or container anonymous read (for container and blob reads, which allows us to publicly view containers and the blobs within).
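
As a sketch, setting a container’s access level from code (using the same WindowsAzure.Storage client as in the earlier posts) might look like this

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("test");
container.CreateIfNotExists();

// Blob = anonymous read access to blobs only,
// Container = anonymous read access to the container and its blobs,
// Off = private
container.SetPermissions(new BlobContainerPermissions
{
   PublicAccess = BlobContainerPublicAccessType.Blob
});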

Files

Unsurprisingly, given that Blob storage acts a little like a file system, Azure also includes a File storage mechanism which one can view like a file share (using the SMB 3.0 protocol at the time of writing). Again, this is used internally by Azure applications for log file storage. There’s little more to say except that file storage allows us to create directories, subdirectories and files as one would expect. We can also set a quota on our shares to ensure the total size of files on the share doesn’t exceed a threshold (in GB; currently there’s a limit of 5120 GB).

Tables

Table storage allows us to work with entity storage. This is more of a NoSQL (or key/value storage) offering than the SQL database options within Azure. We’re definitely not talking relational databases here, but instead we have a fast mechanism for storing application data. Again, Azure already uses table storage within our storage account for internal data.

Check out the post Working with 154 million records on Azure Table Storage – the story of “Have I been pwned?” on Table storage performance; Troy Hunt’s storing far more than I currently have within Table Storage.

Table storage stores entities whereby each entity stores key/value pairs representing the property name and the property value (as one would expect). Along with the entity’s properties we also need to define three system properties: PartitionKey (a string which identifies the partition an entity belongs to), RowKey (a string unique identifier for the entity within the partition) and Timestamp (a DateTime which indicates when the entity was last modified).

Note: An entity, according to Get started with Azure Table storage using .NET, can be up to 1 MB in size and can have a maximum of 252 properties.

In C# terms we can simply derive our entity object from the Microsoft.WindowsAzure.Storage.Table.TableEntity which will supply the required properties. For example

using Microsoft.WindowsAzure.Storage.Table;

public class PersonEntity : TableEntity
{
   public PersonEntity()
   {
      // We need to expose a default ctor
   }

   public PersonEntity(string firstName, string lastName)
   {
      PartitionKey = lastName;
      RowKey = firstName;
   }

   public int Age { get; set; }
}

In the above we’ve declared the last name as the PartitionKey and the first name as the RowKey; obviously a better strategy will be required in production systems – see Designing a Scalable Partitioning Strategy for Azure Table Storage and the Azure Storage Table Design Guide: Designing Scalable and Performant Tables for more information on partition key strategy.

For more information on developing C#/.NET code for Table Storage, check out Get started with Azure Table storage using .NET.

Queues

Not as fully featured as MSMQ, but Queues offer a message queue service. This feature allows us to communicate between two separate applications and/or Azure functions to put together a composite application.

Let’s go to Azure Functions and create a C# QueueTrigger function; the default code will simply log the queue’s message and looks like this

using System;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {myQueueItem}");
}

You’ll need to set the Queue name to the name of the queue you’ll be sending messages on, so let’s call ours test-queue.

Now Run the function.

From Storage explorer or from another instance of the Azure portal (as we want to watch the Azure function running) create a Queue named test-queue and that’s basically it.

Now Add a message to the queue and watch your Azure function log. It should display your message. After a message has been received it’s automatically removed from the queue.

Cosmos DB

At the time of writing Azure Cosmos DB is in preview, so anything I mention here may well have changed by the time it’s out of preview.

I won’t go through every capability of Cosmos DB – you can check out the Welcome to Azure Cosmos DB link for that – but the key capabilities that interest me most include data distributed across regions, “standard” APIs including MongoDB and the Table API, as well as tunable consistency.

Writing your first Azure Function

With serverless all the rage at the moment, and AWS Lambdas and Azure Functions offering us easy routes into serverless in the cloud, I thought it about time I actually looked at what I need to do to create a simple Azure function.

Creating a function using the portal

  • Log into https://portal.azure.com/
  • From the Dashboard select New
  • Select Function App
  • Supply an App name, set your subscription, location etc. then press Create (I also pinned mine to the dashboard using the Pin to dashboard checkbox)
  • After a while (it’s not instant) you’ll be met with options for your Function Apps, Functions, Proxies and Slots.
  • Click the + next to Functions
  • Select HttpTrigger – C# (or whatever your preferred language is); other triggers can be investigated later, but this basically creates a simple REST-like function, perfect for quick testing
  • Give your function a name and for now leave the Authorization level alone

I used the default name etc and got the following code generated for me by Azure’s wizard.

Note: Whilst there is some information out there on renaming Azure functions, the current portal doesn’t seem to support this, so best to get your name right first time or be prepared to copy and paste code for now into a new function.

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // parse query parameter
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // Get request body
    dynamic data = await req.Content.ReadAsAsync<object>();

    // Set name to query string or body data
    name = name ?? data?.name;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}

This is basically the Hello World of Azure functions. If you run it from the portal you’ll see the output “Hello Azure” as well as the log output. The request box allows you to edit the “name” field which is sent to the function.

By default the authorization level is Function, which basically means that a key is required to run this function. If you click the </> Get function URL link, Azure will supply the full URL (minus the request payload) and this will include the text code=abc123, where abc123 represents a much longer key.

Note: you can find your keys also via the Manage option below the Functions list on the left of the portal. Click the “Click to show” on Function Keys default.

We can remove the requirement for the key and make the function accessible to anonymous users by selecting the Integrate option under the Function name on the left list of the portal and changing Authorization level to Anonymous.

Beware that with the Anonymous option you’ll be allowing anyone to call your function, which ultimately ends with your account paying for the usage.

Now, once you have the URL for your function just append &name=World to it and navigate to the function via your browser (remove the code= text and don’t bother with the & for anonymous access).
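
Alternatively, a minimal sketch of calling the function from C# might look like this – the app name, function name and key below are placeholders, so substitute your own values copied from Get function URL.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task CallFunctionAsync()
{
   // placeholder URL - copy the real one from "Get function URL"
   const string url =
      "https://myfunctionapp.azurewebsites.net/api/HttpTriggerCSharp1?code=abc123&name=World";

   using (var client = new HttpClient())
   {
      var response = await client.GetAsync(url);
      Console.WriteLine(await response.Content.ReadAsStringAsync());
   }
}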

Creating and deploying from Visual Studio 2017

If you have the Azure tools installed then you can create Azure functions directly in Visual Studio (I’m using 2017 for this example).

  • File | New Project
  • Select Cloud | Azure Functions (or search for it via the search box)
  • Name your project, mine’s HelloWorldFunctions
  • At this point I build the project to bring in the NuGet packages required – you can obviously do this later
  • Add a new item to the project, select Azure Function, mine’s named Echo.cs
  • Select HttpTrigger to create a REST like function
  • Leave Access rights to Function unless you want Anonymous access

When completed, we again get the default Hello Azure/World sample code. We’re not going to bother changing this at this time, but instead will look to deploy it to Azure. But first, let’s test this function in Visual Studio (offline). Simply click the run button and a console window will open with a web server running that mimics Azure; mine makes the function available via

http://localhost:7071/api/Echo

If we run this from a web browser and add the ?name=, like this

http://localhost:7071/api/Echo?name=Mark

we’ll get “Hello Mark” returned.

Okay, so it’s working fine, let’s deploy/publish it.

  • Right mouse click on the project
  • Select Publish…
  • Either create new or select existing (if you already have an app), you may need to enter your account details at this point
  • In my case I created the application via the Portal first so I selected existing and then select my application from the options available
  • Press OK
  • Eventually the Publish button will enable, then press this

This will actually publish/deploy the DLL with your Azure function(s) to the Azure application. When you view the function in the Azure Portal you won’t see any source code as you’ve actually just deployed the compiled DLL instead.

In the portal, add your Request body, i.e.

{
   "name" : "Azure"
}

and run from the Portal – this is basically a replica (at the functional level) of the portal-created default function.

Let’s look at the code

I’m going to show the code created via the Visual Studio template here

public static class Echo
{
   [FunctionName("Echo")]
   public static async Task<HttpResponseMessage> Run(
      [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]
      HttpRequestMessage req, TraceWriter log)
   {
      log.Info("C# HTTP trigger function processed a request.");

      // parse query parameter
      string name = req.GetQueryNameValuePairs()
         .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
         .Value;

      // Get request body
      dynamic data = await req.Content.ReadAsAsync<object>();

      // Set name to query string or body data
      name = name ?? data?.name;

      return name == null ? 
        req.CreateResponse(HttpStatusCode.BadRequest, 
           "Please pass a name on the query string or in the request body")
       : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
   }
}

If you’ve used any C# or Java HTTP like functions, this will all seem pretty familiar.

Our function is passed two parameters, the first being the HTTP request message and the second being the logging object. Obviously the logger is pretty self-explanatory.

The HttpRequestMessage contains all those sorts of things we’d expect for an HTTP request, i.e. query name parameters, headers, content etc. and like many similar types of functions we return an HTTP response along with a result code.