Category Archives: Azure

Combining Serilog and Table Storage

As I’ve written posts on Serilog and Azure Storage recently, I thought I’d try to combine the two and use Azure table storage for the logging sink. Thankfully, I didn’t need to code anything myself, as this type of sink has already been written; see the AzureTableStorage sink.

If we take the basic code from Serilog revisited (now version 2.5) and Azure Table Storage, and include the glue of the AzureTableStorage sink, we get the following

  • Add NuGet Package Serilog.Sinks.AzureTableStorage
var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
   "StorageConnectionString"));

Log.Logger = new LoggerConfiguration()
   .WriteTo.AzureTableStorage(storageAccount, storageTableName: "MyApp")
   .CreateLogger();

Log.Logger.Information("Application Started");

for (var i = 0; i < 10; i++)
{
   Log.Logger.Information("Iteration {I}", i);
}

Log.Logger.Information("Exiting Application");

Without the storageTableName parameter we get the default table name, LogEventEntity. That’s all there is to it; now we’re sending our log entries into the cloud.
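If you want to check what the sink has written, you can read the entries back with the table API covered below. This is just a minimal sketch, assuming the storageAccount from above and that the sink stores the formatted message in a RenderedMessage property (check the generated table in Storage Explorer for the exact property names).

var table = storageAccount
   .CreateCloudTableClient()
   .GetTableReference("MyApp");

// Each log event is stored as an entity; DynamicTableEntity lets us
// inspect the properties without declaring an entity class
foreach (var entity in table.ExecuteQuery(new TableQuery<DynamicTableEntity>()))
{
   // RenderedMessage is assumed to hold the formatted log message
   Console.WriteLine(entity.Properties["RenderedMessage"].StringValue);
}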

Azure Table Storage

Table storage is a schema-less data store, so it has some similarity to NoSQL databases.

For this post I’m just going to cover the code snippets for the basic CRUD type operations.

Creating a table

We can create a table within table storage using the following code.

At this point the table is empty of data and, being schema-less, it has no form; it’s really just an empty container.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

table.CreateIfNotExists();

Deleting a table

Deleting a table is as simple as this

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("mytable");

table.Delete();

Entities

Using the Azure table API we need to implement the ITableEntity interface or derive our entities from the TableEntity class. For example

class Plant : TableEntity
{
   public Plant()
   {
   }

   public Plant(string type, string species)
   {
      PartitionKey = type;
      RowKey = species;
   }

   public string Comment { get; set; }
}

In this simple example we map the type of plant to the PartitionKey and the species to the RowKey; obviously you might prefer using Guids or other ways of keying into your data. The thing to remember is that the PartitionKey/RowKey combination must be unique within the table. The example above is not going to make code very readable, so it’s more likely that we’d also declare properties with more apt names, such as Type and Species, but it was meant to be a quick and simple piece of code.

Writing an entity to table storage

Writing entities (and many other entity operations) is handled by the Execute method on the table. Which operation we use is determined by the TableOperation passed as a parameter to the Execute method.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

table.CreateIfNotExists();

var p = new Plant("Flower", "Rose")
{
   Comment = "Watch out for thorns"
};

table.Execute(TableOperation.Insert(p));

This will throw an exception if an entity with the same PartitionKey/RowKey combination already exists in the table, so we might prefer to tell table storage to insert or update…
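If you want to handle the failure explicitly, the error surfaces as a StorageException; a minimal sketch, assuming the table and p from above and the 409 Conflict status returned for a duplicate key, might look like this

try
{
   table.Execute(TableOperation.Insert(p));
}
catch (StorageException e)
   when (e.RequestInformation.HttpStatusCode == 409)
{
   // An entity with this PartitionKey/RowKey already exists
   Console.WriteLine("Entity already exists");
}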

Updating entities within table storage

If we prefer to handle both insertion and updating within a single call we can use the TableOperation.InsertOrReplace method

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var p = new Plant("Flower", "Rose")
{
   Comment = "Thorns along the stem"
};

table.Execute(TableOperation.InsertOrReplace(p));

There’s also TableOperation.InsertOrMerge which, in essence, merges new property values onto the existing entity if the entity already exists.
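As a quick sketch of the difference (the comment text is just illustrative): InsertOrMerge only writes the properties set on the incoming entity, leaving any other stored properties untouched, whereas InsertOrReplace overwrites the whole entity.

var update = new Plant("Flower", "Rose")
{
   Comment = "Prune in late winter"
};

// Merges Comment onto the stored entity; other stored properties
// (if any) are preserved rather than replaced
table.Execute(TableOperation.InsertOrMerge(update));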

Retrieving entities from table storage

Retrieving an entity by its PartitionKey/RowKey is accomplished using the TableOperation.Retrieve operation.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var entity = (Plant)table.Execute(
   TableOperation.Retrieve<Plant>(
      "Flower", "Rose")).Result;

Console.WriteLine(entity.Comment);

Deleting an entity from table storage

Deleting an entity is a two-stage process: first we retrieve the entity, then we pass it to the Execute method with TableOperation.Delete and the entity will be removed from table storage.

Note: obviously I’ve not included error handling in this or the other code snippets, particularly here, where a valid entity may not be found.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var entity = (Plant)table.Execute(
   TableOperation.Retrieve<Plant>(
      "Flower", "Crocus")).Result;
table.Execute(TableOperation.Delete(entity));

Query Projections

In cases where our data has many properties, we might prefer to query against it and use projection capabilities to reduce the properties retrieved. To do this we use the TableQuery. For example, let’s say all we’re after is the Comment from our entities; then we could write the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
   "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var projectionQuery = new TableQuery<DynamicTableEntity>()
   .Select(new [] { "Comment" });

EntityResolver<string> resolver =
   (partitionKey, rowKey, timestamp, properties, etag) =>
      properties.ContainsKey("Comment") ?
      properties["Comment"].StringValue :
      null;

foreach (var comment in table.ExecuteQuery(projectionQuery, resolver))
{
   Console.WriteLine(comment);
}

The TableQuery line is where we create the projection, i.e. what properties we want to retrieve. In this case we’re only interested in the “Comment” property, but we could add other properties (excluding the standard PartitionKey, RowKey and Timestamp properties, as these are retrieved anyway).

The next line is the resolver which is passed to ExecuteQuery along with the projectionQuery. This is basically a delegate which acts as the “custom deserialization logic”. See Windows Azure Storage Client Library 2.0 Tables Deep Dive; whilst an old article, it’s still very relevant. Of course, the example above uses a lambda; in situations where we’re doing a lot of these sorts of projection queries we’d just create a method and pass that into ExecuteQuery as the resolver.
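For example, a minimal sketch of pulling the resolver out into a method (the parameter list is dictated by the EntityResolver<T> delegate)

private static string CommentResolver(
   string partitionKey, string rowKey, DateTimeOffset timestamp,
   IDictionary<string, EntityProperty> properties, string etag)
{
   return properties.ContainsKey("Comment")
      ? properties["Comment"].StringValue
      : null;
}

// then, using the projectionQuery and table from above...
foreach (var comment in table.ExecuteQuery(projectionQuery, CommentResolver))
{
   Console.WriteLine(comment);
}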

Querying using LINQ

Whilst LINQ is supported for querying table storage data, at the time of writing it’s a little limited and requires you to write your queries in a specific way.

Let’s first look at a valid LINQ query against our plant table

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
   "StorageConnectionString"));

var client = storageAccount.CreateCloudTableClient();
var table = client.GetTableReference("plants");

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment == "Thorns along the stem"
   select entity;

foreach (var r in query)
{
   Console.WriteLine(r.RowKey);
}

In this example we’ll query table storage for any plants with the Comment “Thorns along the stem”, but now if we were to query for a Comment which contains the word “Thorns”, like this

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment.Contains("Thorns along the stem")
   select entity;

Sadly we’ll get a (501) Not Implemented back from the table storage service, so there’s obviously a limit to how we can query our table storage data, which is fair enough. If we want more complex query capabilities we’d probably be best served using a different data store.
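One workaround, sketched below, is to keep the server-side filter to the supported key comparisons and apply the Contains predicate in memory; fine for a small partition, but be aware it pulls the whole partition over the wire.

// Filter by PartitionKey server-side, then apply the unsupported
// Contains predicate client-side
var thorny = table.CreateQuery<Plant>()
   .Where(e => e.PartitionKey == "Flower")
   .ToList()
   .Where(e => e.Comment != null && e.Comment.Contains("Thorns"));

foreach (var r in thorny)
{
   Console.WriteLine(r.RowKey);
}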

We can also use projections on our query, i.e.

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment == "Thorns along the stem"
   select entity.RowKey;

or using anonymous types, such as

var query = from entity in table.CreateQuery<Plant>()
   where entity.Comment == "Thorns along the stem"
   select new
   {
      entity.RowKey,
      entity.Comment
   };

Using Azure Queues

Azure Queues sit inside your Azure storage account and allow messages to flow through a queue system (similar in some ways to, but not as fully featured as, MSMQ, TIBCO etc.).

Creating a Queue

Obviously we can create a queue using the Azure Portal or Azure Storage Explorer, but let’s create a queue via code, using the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");
queue.CreateIfNotExists();

Sending a message to our Queue

Adding a message to the queue is as simple as the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");

var message = new CloudQueueMessage("Hello World");
queue.AddMessage(message);

Peeking at a Queue

We can peek at a message on the queue, which basically means we can look at the message without affecting it, using

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");

var message = queue.PeekMessage();
Console.WriteLine(message.AsString);

As the name suggests, in this instance we’re peeking at the message at the front of the queue. But we can also use the PeekMessages method to enumerate over a number of messages on the queue.

Here’s an example of peeking at 32 messages (it appears 32 is the maximum number of messages we’re allowed to peek; currently anything above this causes a bad request exception)

foreach (var message in queue.PeekMessages(32))
{
   Console.WriteLine(message.AsString);
}

Getting messages

Unlike, for example, TIBCO RV, Azure Queues do not have subscribers and therefore do not push messages to subscribers (like an event might). Once a message is de-queued it will be marked as invisible (see the CloudQueue.GetMessage method).

To de-queue a message we use GetMessage on a queue. As one might expect, once the message is marked as invisible, subsequent calls to GetMessage will not return the hidden message until the visibility timeout is reached, at which point the message reappears and is available again to subsequent GetMessage calls.

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("my-queue");

var message = queue.GetMessage();

Console.WriteLine(message.AsString);

Now if you change the var message = queue.GetMessage(); line to the following

var message = queue.GetMessage(TimeSpan.FromSeconds(10));

and then refresh within the Azure Portal or Azure Storage Explorer: immediately after it’s de-queued the message will disappear, but refreshing again after 10 seconds the message will reappear in the queue with its dequeue count incremented.

Like PeekMessages, we can call GetMessages to get a batch of messages (between 1 and 32 messages).
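A minimal sketch of the batch call, assuming the queue from above and an illustrative one-minute visibility timeout while we work through the batch

// Hide up to 32 messages for a minute while we process them
foreach (var m in queue.GetMessages(32, TimeSpan.FromMinutes(1)))
{
   Console.WriteLine(m.AsString);
}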

Deleting a message

To remove a message altogether use

queue.DeleteMessage(message);

This would usually be called after GetMessage, but obviously this is dependent upon your requirements. It might be called after a certain dequeue count or simply after every GetMessage call; remember that if you do not delete the message it will reappear on your queue until its maximum time to live (as supplied via the AddMessage method) ends.
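Putting the pieces together, a typical consume pattern looks something like the sketch below (Process is a hypothetical handler and the timeout is just an example)

var message = queue.GetMessage(TimeSpan.FromMinutes(1));
if (message != null)
{
   try
   {
      Process(message.AsString); // hypothetical message handler

      // Only delete once processing has succeeded
      queue.DeleteMessage(message);
   }
   catch
   {
      // Leave the message alone; it reappears when the visibility
      // timeout expires and can be retried (watch DequeueCount)
   }
}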

Using Azure File Storage

File storage is pretty much what it says on the tin. It’s a shared access file system using the SMB protocol. You can create directories, subdirectories and store files in those directories – yes, it’s a file system.

Reading a file

Using the Azure Portal, either locate or create a storage account; within this create a File storage service, then create a share. Within the share upload a file; mine’s the Hello World.txt file with those immortal words Hello World within it.

Let’s read this file from our client. In many ways the client API is very similar to that used for Blob storage (as one might expect).

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var fileClient = storageAccount.CreateCloudFileClient();

var share = fileClient.GetShareReference("myfiles");
var root = share.GetRootDirectoryReference();
var file = root.GetFileReference("Hello World.txt");

var contents = file.DownloadText();

Uploading a file

We can upload a file using the UploadFromFile method. In this example we’ll just upload to the root folder of the myfiles share

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var fileClient = storageAccount.CreateCloudFileClient();

var share = fileClient.GetShareReference("myfiles");
var root = share.GetRootDirectoryReference();
var file = root.GetFileReference("Hello World.txt");
file.UploadFromFile("Hello World.txt");
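As the share supports directories, here’s a quick sketch of uploading into a subdirectory instead (the reports directory name is just illustrative)

var directory = root.GetDirectoryReference("reports");
directory.CreateIfNotExists();

var report = directory.GetFileReference("Hello World.txt");
report.UploadFromFile("Hello World.txt");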

Deleting a file

Deleting a file is as simple as the following

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var fileClient = storageAccount.CreateCloudFileClient();

var share = fileClient.GetShareReference("myfiles");
var root = share.GetRootDirectoryReference();
var file = root.GetFileReference("Hello World.txt");
file.Delete();

References

Introduction to Azure File storage
File Service REST API

Using Azure Blob Storage

Blob storage has the concept of containers (which can be thought of as directories) and those containers can contain BLOB’s. Containers cannot contain containers and hence differ from a file system in structure.

Containers can be private or allow anonymous read access to BLOBs only or allow anonymous read access for containers and BLOBs.

In the Azure Portal, if you haven’t already got one set up, create a storage account then create a Blob service. Next create a container, mine’s named test. Next, upload a file, I’ve uploaded Hello World.txt which as you imagine simply has the line Hello World within it.

When you create a Blob service you’re assigned an endpoint name, along the lines of https://<storage account>.blob.core.windows.net/

When we add containers, these get the URL https://<storage account>.blob.core.windows.net/<container name> and files get the URL https://<storage account>.blob.core.windows.net/<container name>/<file name>

A comprehensive document on using .NET to interact with the Blob storage can be found at Get started with Azure Blob storage using .NET.

Reading a Blob

Here’s some code to read our uploaded Hello World.txt file

Firstly we need to use NuGet to add the packages WindowsAzure.Storage and Microsoft.WindowsAzure.ConfigurationManager.

using Microsoft.Azure;
using Microsoft.WindowsAzure.Storage;

private static void ReadBlob()
{
   var storageAccount = CloudStorageAccount.Parse(
      CloudConfigurationManager.GetSetting(
         "StorageConnectionString"));

   var blobClient = storageAccount.CreateCloudBlobClient();
   var container = blobClient.GetContainerReference("test");
   var blob = container.GetBlockBlobReference("Hello World.txt");
   var contents = blob.DownloadText();

   Console.WriteLine(contents);
}

In the example our file is a text file, but we can also access the blob as a stream using DownloadToStream (plus there’s a whole bunch of other methods for accessing the blobs).
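A minimal sketch of the stream-based version, assuming the same blob reference as above (and System.IO for MemoryStream/StreamReader)

using (var stream = new MemoryStream())
{
   blob.DownloadToStream(stream);

   // Rewind before reading back what was downloaded
   stream.Position = 0;
   using (var reader = new StreamReader(stream))
   {
      Console.WriteLine(reader.ReadToEnd());
   }
}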

Writing a Blob

We can write blobs pretty easily also

public static void WriteBlob()
{
   var storageAccount = CloudStorageAccount.Parse(
      CloudConfigurationManager.GetSetting(
         "StorageConnectionString"));

   var blobClient = storageAccount.CreateCloudBlobClient();

   var container = blobClient.GetContainerReference("test");

   var blob = container.GetBlockBlobReference("new.txt");
   blob.UploadFromFile("new.txt");
}

In this example, as you’ll see, we still do the standard steps: connect to the cloud via the blob client and get a reference to the container we want to interact with, but next we get a blob reference for a file (in this case new.txt didn’t already exist) and then upload, or write from a stream, to blob storage. If new.txt does exist in blob storage it’ll simply be overwritten.

Deleting a Blob

We’ve looked at creation/update of blobs and retrieval of them, so let’s complete the CRUD operations on blobs with delete

public static void DeleteBlob()
{
   var storageAccount = CloudStorageAccount.Parse(
      CloudConfigurationManager.GetSetting(
         "StorageConnectionString"));

   var blobClient = storageAccount.CreateCloudBlobClient();

   var container = blobClient.GetContainerReference("test");

   var blob = container.GetBlockBlobReference("new.txt");
   blob.DeleteIfExists();
}

References

Along with the standard CRUD type operations we can carry out actions on containers, list blobs etc.
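For example, a minimal sketch of listing the blobs within our test container (a flat listing, so blobs in any virtual subdirectories are included too)

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("test");

foreach (var item in container.ListBlobs(useFlatBlobListing: true))
{
   Console.WriteLine(item.Uri);
}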

See Get started with Azure Blob storage using .NET for more information.

The Blob Service REST API lists the REST API’s if you’d prefer to bypass the Microsoft libraries for accessing the Blobs (or need to implement in a different language).

Azure Storage

Blobs, Files, Tables and Queues – with Azure Cosmos DB in preview (at the time of writing) – these are the currently supported Storage account options in Azure. Of course, alongside the storage account options are SQL databases, which I’ll cover in another post.

Usually when you create an application within Azure’s portal, you’ll also have an App Service plan created (with your deployment location set up) and a Storage Account will be set up automatically for you.

Note: You can view your storage account details via the Azure portal or using the Windows application, Storage Explorer, which also allows access to local storage accounts for development/testing.

Blobs

The Blob container is used internally by Azure applications to store information regarding webjobs (and probably more, although I’ve not yet experienced all the options), but for us developers and our applications (or just as plain old storage) we can use it to store any type of file/binary data. The difference between Blob storage and File storage is more down to the way things are stored.

Within Blob storage we create containers (such as those Azure creates for our applications/connections) and we can make a container private, blob anonymous read (allowing public read access to individual blobs) or container anonymous read (allowing public reads of both the container listing and the blobs within).
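Those access levels can also be set from code; here’s a minimal sketch using the WindowsAzure.Storage client (the test container name is just an example)

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var container = storageAccount
   .CreateCloudBlobClient()
   .GetContainerReference("test");

// Off = private, Blob = anonymous reads of blobs only,
// Container = anonymous reads of the container and its blobs
container.SetPermissions(new BlobContainerPermissions
{
   PublicAccess = BlobContainerPublicAccessType.Blob
});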

Files

Unsurprisingly, given that Blob storage acts a little like a file system, Azure also includes a File storage mechanism which one can look at like a file share (using the SMB 3.0 protocol at the time of writing). Again, this is used internally by Azure applications for log file storage. There’s little more to say except that file storage allows us to create directories, subdirectories and files as one would expect. We can also set a quota on our shares to ensure the total size of files on the share doesn’t exceed a threshold (in GB; currently there’s a limit of 5120 GB).

Tables

Table storage allows us to work with entity storage. This is more of a NoSQL (or key/value storage) offering than the SQL database options within Azure. We’re definitely not talking relational databases here; instead we have a fast mechanism for storing application data. Again, Azure already uses table storage within our storage account for internal data.

Check out this post Working with 154 million records on Azure Table Storage – the story of “Have I been pwned?” on performance within Table storage, Troy Hunt’s storing far more than I currently have within Table Storage.

Table storage stores entities whereby each entity stores key/value pairs representing the property names and values (as one would expect). Along with the entity’s own properties we also need to define three system properties: PartitionKey (a string which identifies the partition an entity belongs to), RowKey (a string which uniquely identifies the entity within the partition) and Timestamp (a DateTime which indicates when the entity was last modified).

Note: an entity, according to Get started with Azure Table storage using .NET, can be up to 1 MB in size and can have a maximum of 252 properties.

In C# terms we can simply derive our entity object from the Microsoft.WindowsAzure.Storage.Table.TableEntity which will supply the required properties. For example

using Microsoft.WindowsAzure.Storage.Table;

public class PersonEntity : TableEntity
{
   public PersonEntity()
   {
      // We need to expose a default ctor
   }

   public PersonEntity(string firstName, string lastName)
   {
      PartitionKey = lastName;
      RowKey = firstName;
   }

   public int Age { get; set; }
}

In the above we’ve declared the last name as the PartitionKey and the first name as the RowKey; obviously a better strategy will be required in production systems – see Designing a Scalable Partitioning Strategy for Azure Table Storage and the Azure Storage Table Design Guide: Designing Scalable and Performant Tables for more information on partition key strategies.

For more information on developing C#/.NET code for Table Storage, check out Get started with Azure Table storage using .NET.

Queues

Not as fully featured as MSMQ, but Queues offer a message queue service. This feature allows two separate applications and/or Azure functions to communicate, letting us put together a composite application.

Let’s go to Azure functions and create a C# QueueTrigger function, the default code will simply log the queue’s message and looks like this

using System;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {myQueueItem}");
}

You’ll need to set the Queue name to the name of the queue you’ll be sending messages on, so let’s call ours test-queue.

Now Run the function.

From Storage explorer or from another instance of the Azure portal (as we want to watch the Azure function running) create a Queue named test-queue and that’s basically it.

Now add a message to the queue and watch your Azure function log; it should display your message. After a message has been received it’s automatically removed from the queue.
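Rather than adding the message through the portal, you can of course drive the trigger from code using the queue client from earlier; a minimal sketch, assuming test-queue lives in the storage account the function is bound to

var storageAccount = CloudStorageAccount.Parse(
   CloudConfigurationManager.GetSetting(
      "StorageConnectionString"));

var client = storageAccount.CreateCloudQueueClient();
var queue = client.GetQueueReference("test-queue");
queue.CreateIfNotExists();

// The function should log this message almost immediately
queue.AddMessage(new CloudQueueMessage("Hello function"));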

Cosmos DB

At the time of writing Azure Cosmos DB is in preview, so anything I mention here may well have changed by the time it’s out of preview.

I won’t go through every capability of Cosmos DB – you can check out the Welcome to Azure Cosmos DB link for that – but some of the key capabilities that interest me most include distributing data across regions, “standard” APIs (including MongoDB and the Table API) as well as tunable consistency.

Writing your first Azure Function

With serverless all the rage at the moment, and AWS Lambda and Azure Functions offering us easy routes into serverless in the cloud, I thought it about time I actually looked at what I need to do to create a simple Azure function.

Creating a function using the portal

  • Log into https://portal.azure.com/
  • From the Dashboard select New
  • Select Function App
  • Supply an App name, set your subscription, location etc. then press Create (I pinned mine the dashboard also using the Pin to dashboard checkbox)
  • After a while (it’s not instant) you’ll be met with options for your Function Apps, Functions, Proxies and Slots.
  • Click the + next to Functions
  • Select HttpTrigger – C# or whatever your preferred language is, other triggers can be investigated later but this basically creates a simple REST like function, perfect for quick testing
  • Give your function a name and for now leave the Authorization level alone

I used the default name etc and got the following code generated for me by Azure’s wizard.

Note: Whilst there is some information out there on renaming Azure functions, the current portal doesn’t seem to support this, so best to get your name right first time or be prepared to copy and paste code for now into a new function.

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // parse query parameter
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // Get request body
    dynamic data = await req.Content.ReadAsAsync<object>();

    // Set name to query string or body data
    name = name ?? data?.name;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}

This is basically the Hello World of Azure functions. If you run it from the portal you’ll see the output “Hello Azure” as well as the log output. The request box allows you to edit the “name” field which is sent to the function.

By default the authorization level is Function, which basically means that a key is required to run this function. If you click the </> Get function URL link, Azure will supply the full URL, minus the request payload, and this will include the text code=abc123, where abc123 represents a much longer key.

Note: you can find your keys also via the Manage option below the Functions list on the left of the portal. Click the “Click to show” on Function Keys default.

We can remove the requirement for the key and make the function accessible to anonymous users by selecting the Integrate option under the Function name on the left list of the portal and changing Authorization level to Anonymous.

Beware that you’ll be allowing anyone to call your function with the Anonymous option which ultimately ends with your account paying for usage.

Now, once you have the URL for your function, just append &name=World to it and navigate to the function via your browser (remove the code= text and don’t bother with the & for anonymous access).

Creating and deploying from Visual Studio 2017

If you have the Azure tools installed then you can create Azure functions directly in Visual Studio (I’m using 2017 for this example).

  • File | New Project
  • Select Cloud | Azure Functions (or search for it via the search box)
  • Name your project, mine’s HelloWorldFunctions
  • At this point I build the project to bring in the NuGet packages required – you can obviously do this later
  • Add a new item to the project, select Azure Function, mine’s named Echo.cs
  • Select HttpTrigger to create a REST like function
  • Leave Access rights to Function unless you want Anonymous access

When completed, we again get the default Hello Azure/World sample code. We’re not going to bother changing this at this time, but instead will look to deploy it to Azure. But first, let’s test this function in Visual Studio (offline). Simply click the run button and a console window will open with a web server running, mimicking Azure; mine makes the function available via

http://localhost:7071/api/Echo

If we run this from a web browser and add the ?name=, like this

http://localhost:7071/api/Echo?name=Mark

we’ll get “Hello Mark” returned.

Okay, so it’s working fine, let’s deploy/publish it.

  • Right mouse click on the project
  • Select Publish…
  • Either create new or select existing (if you already have an app), you may need to enter your account details at this point
  • In my case I created the application via the Portal first so I selected existing and then select my application from the options available
  • Press OK
  • Eventually the Publish button will enable, then press this

This will actually publish/deploy the DLL with your Azure function(s) to the Azure application. When you view the function in the Azure Portal you won’t see any source code as you’ve actually just deployed the compiled DLL instead.

In the portal, add your Request body, i.e.

{
   "name" : "Azure"
}

and run from the Portal – this is basically a replica (at the functional level) of the default function created via the portal.

Let’s look at the code

I’m going to show the code created via the Visual Studio template here

public static class Echo
{
   [FunctionName("Echo")]
   public static async Task<HttpResponseMessage> Run(
      [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]
      HttpRequestMessage req, TraceWriter log)
   {
      log.Info("C# HTTP trigger function processed a request.");

      // parse query parameter
      string name = req.GetQueryNameValuePairs()
         .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
         .Value;

      // Get request body
      dynamic data = await req.Content.ReadAsAsync<object>();

      // Set name to query string or body data
      name = name ?? data?.name;

      return name == null ? 
        req.CreateResponse(HttpStatusCode.BadRequest, 
           "Please pass a name on the query string or in the request body")
       : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
   }
}

If you’ve used any C# or Java HTTP like functions, this will all seem pretty familiar.

Our function is passed two parameters, the first being the HTTP request message and the second the logging object. Obviously the logger is pretty self-explanatory.

The HttpRequestMessage contains all those sorts of things we’d expect for an HTTP request, i.e. query name parameters, headers, content etc. and like many similar types of functions we return an HTTP response along with a result code.

Creating a Mobile App on Azure

If you’re wanting a highly scalable cloud based service for mobile applications, you could use the Mobile Apps service in Azure. Mobile Apps offer authentication, data access, offline sync capabilities and more.

Creating the Mobile Apps service

Let’s jump in and simply create our Mobile Apps service.

  • Log into the Azure portal on https://portal.azure.com
  • Select New
  • Select Web + Mobile
  • Select Mobile App
  • Enter a name for your app (which needs to be unique on the azurewebsites.net domain)
  • Supply a new resource group name

Once the new application has been deployed, go to the App Services section in Azure to see the current status of your new service.

Let’s add a database

Let’s take this a little further now – a good possibility is that we need to store data in the cloud for our mobile device. So from your previously created App service, scroll down the Settings tab until you come across the “MOBILE” section, then follow these instructions to create a data connection (which includes creating an SQL Server DB or connecting to an existing one).

  • Select Data Connections from the MOBILE section
  • Press the Add button
  • Leave the default SQL Database for the Type
  • Press on the SQL Database/Configure required settings
  • If you already have an SQL database setup you can select an existing one to use, or click the Create a new database option
  • Assuming you’re creating a new database, supply a database name
  • Click the Configure required settings and supply a server name, admin login, password and set the location of the server

When completed the data connections should list your new database connection.

Easy tables

We can create simple tables and offer CRUD-like APIs against our data by using an Azure feature called Easy Tables. This allows us to use REST and JSON to carry out CRUD operations against our SQL database with no coding on the server. To create Easy Tables select your mobile app service, select the Settings tab and scroll down to the “MOBILE” section (if not already there), then do the following

  • Select the Easy tables option
  • Press the Add button and give your table a name, for example for a blog reader we might start with the list of feeds, hence the table can be Feeds, for now you might leave all the permissions as “Allow anonymous access” just to get up and running
  • You can then click “Manage schema” to add new columns

The URL to your table becomes

https://<your service name>.azurewebsites.net/tables/<your table name>

This now allows us (out of the box, with no extra coding on the server) to access REST APIs for the data. So, assuming our service name is mobile and our table name is Feeds, we can use the following to handle basic CRUD operations (see the sketch after this list)

  • Create: use an HTTP POST operation against the URL https://mobile.azurewebsites.net/tables/Feeds
  • Retrieve (single item): use an HTTP GET operation against the URL https://mobile.azurewebsites.net/tables/Feeds/id, where id is an identifier which Easy Tables creates on our table by default
  • Retrieve (all items): use an HTTP GET operation against the URL https://mobile.azurewebsites.net/tables/Feeds
  • Update: use the HTTP PATCH method against the URL https://mobile.azurewebsites.net/tables/Feeds/id
  • Delete: use an HTTP DELETE operation against the URL https://mobile.azurewebsites.net/tables/Feeds/id
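As a quick illustration, here’s a minimal sketch calling the Feeds table with HttpClient (the mobile service name and the name column are hypothetical; Mobile Apps also expects the ZUMO-API-VERSION header shown)

public static async Task CallFeedsAsync()
{
   using (var http = new HttpClient())
   {
      // Required on REST calls to Azure Mobile Apps
      http.DefaultRequestHeaders.Add("ZUMO-API-VERSION", "2.0.0");

      // Retrieve all items
      var all = await http.GetStringAsync(
         "https://mobile.azurewebsites.net/tables/Feeds");
      Console.WriteLine(all);

      // Create an item ("name" is a hypothetical column)
      var response = await http.PostAsync(
         "https://mobile.azurewebsites.net/tables/Feeds",
         new StringContent(
            "{ \"name\": \"My Feed\" }",
            Encoding.UTF8,
            "application/json"));
      Console.WriteLine(response.StatusCode);
   }
}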

Adding authentication

Initially we created the Easy table with “Allow anonymous access”, which of course allowed us to test the API quickly but obviously leaves the data unsecured. So now let’s put in place the authentication settings.

  • If not already selected, select your Mobile App and from the settings locate the “MOBILE” section, then select Easy tables and finally select your table
  • Click the Change permissions option
  • Simply change the various permissions to Authenticated access only
  • Don’t forget to press the save button

Now, if you previously wrote code to access the table without any form of authentication, requests to those same services should receive a 401 error.

At this point we will need to provide a valid authentication token from an identity provider. Azure mobile apps can be set up to use Facebook, Twitter, Microsoft Account, Google or Azure Active Directory as a trusted identity provider. Along with these providers you could also roll your own custom identity provider.

To use one of the existing providers you’re going to need to get an app and client id plus an app secret, these are usually obtained from the provider’s dev portals.

Once you have this information you can do the following

  • Go to your Azure mobile app and select the Settings tab
  • Locate the Authentication/Authorization option
  • Set the App service Authentication to On
  • Now configure the Authentication provider, in the case of the social media providers simply insert your app id and app secret (or similar) as requested

Once completed you’ll need to use a slightly different URL to access your tables, as you’ll need an OAuth redirect, so we’ll have something like

https://<your service name>.azurewebsites.net/.auth/login/<your identity provider>/callback

where your identity provider would be facebook, twitter, microsoftaccount, google or aad for example.

References

What are Mobile Apps?