Azure Storage

Blobs, Files, Tables and Queues – with Azure Cosmos DB in preview (at the time of writing), these are the currently supported Storage account options in Azure. Of course, alongside the storage account options are SQL databases, which I’ll cover in another post.

Usually when you create an application within Azure’s portal, you’ll also have an App Service plan created (with your deployment location set up) and a Storage Account will be set up automatically for you.

Note: You can view your storage account details via the Azure portal or using the Windows application, Storage Explorer, which also allows access to local storage accounts for development/testing.
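
The snippets later in this post use the .NET storage SDK (the WindowsAzure.Storage NuGet package). As a minimal sketch – with STORAGE_CONNECTION_STRING being a placeholder environment variable of my own invention – getting hold of an account reference might look like this

using System;
using Microsoft.WindowsAzure.Storage;

public static class StorageAccounts
{
   public static CloudStorageAccount GetAccount(bool useLocal)
   {
      // The local development/emulator account pairs nicely with Storage Explorer
      if (useLocal)
         return CloudStorageAccount.DevelopmentStorageAccount;

      // For a real account, use a connection string from the portal's
      // "Access keys" blade (STORAGE_CONNECTION_STRING is my own placeholder)
      return CloudStorageAccount.Parse(
         Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING"));
   }
}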

Blobs

Blob containers are used internally by Azure applications to store information regarding WebJobs (and probably more, although I’ve not yet experienced all the options), but for us developers and our applications (or just for use as plain old storage) we can use them to store any type of file/binary data. The difference between Blob storage and File storage is more down to the way things are stored.

Within Blob storage we create containers (such as those Azure creates for our applications/connections) and we can make a container private, blob anonymous read (for blob read access, i.e. publicly accessing individual blobs) or container anonymous read (for container and blob reads, which allows us to publicly view containers and the blobs within).
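
As a minimal sketch using the .NET SDK (the container and blob names are just my own examples), creating a container, setting its access level and uploading a blob might look like this

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobExample
{
   public static void CreateContainerWithBlobAccess(CloudStorageAccount account)
   {
      CloudBlobClient blobClient = account.CreateCloudBlobClient();

      CloudBlobContainer container = blobClient.GetContainerReference("my-container");
      container.CreateIfNotExists();

      // PublicAccess maps to the three levels above: Off (private), Blob, Container
      container.SetPermissions(new BlobContainerPermissions
      {
         PublicAccess = BlobContainerPublicAccessType.Blob
      });

      // Any file/binary data can be uploaded, here just some text as a block blob
      CloudBlockBlob blob = container.GetBlockBlobReference("hello.txt");
      blob.UploadText("Hello Blob storage");
   }
}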

Files

Unsurprisingly, given that Blob storage acts a little like a file system, Azure also includes a File storage mechanism which one can think of as a file share (using the SMB 3.0 protocol at the time of writing). Again, this is used internally by Azure applications for log file storage. There’s little more to say except that File storage allows us to create directories, subdirectories and files as one would expect. We can also set a quota on our shares to ensure the total size of files on the share doesn’t exceed a threshold (in GB; currently there’s a limit of 5120 GB).
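
Again as a sketch via the .NET SDK (the share, directory and file names are my own examples; note the local storage emulator doesn’t support File shares, so a real account is needed here), creating a share, setting a quota and writing a file might look like this

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

public static class FileShareExample
{
   public static void CreateShareWithQuota(CloudStorageAccount account)
   {
      CloudFileClient fileClient = account.CreateCloudFileClient();

      CloudFileShare share = fileClient.GetShareReference("my-share");
      share.CreateIfNotExists();

      // Quota is specified in GB, up to the 5120 GB limit mentioned above
      share.Properties.Quota = 10;
      share.SetProperties();

      // Directories, subdirectories and files hang off the share's root directory
      CloudFileDirectory root = share.GetRootDirectoryReference();
      CloudFileDirectory logs = root.GetDirectoryReference("logs");
      logs.CreateIfNotExists();

      CloudFile file = logs.GetFileReference("app.log");
      file.UploadText("First log entry");
   }
}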

Tables

Table storage allows us to work with entity storage. This is more of a NoSQL (or key/value storage) offering than the SQL database options within Azure. We’re definitely not talking relational databases here; instead we have a fast mechanism for storing application data. Again, Azure already uses Table storage within our storage account for internal data.

For a look at performance within Table storage, check out Troy Hunt’s post Working with 154 million records on Azure Table Storage – the story of “Have I been pwned?” – he’s storing far more than I currently have within Table storage.

Table storage stores entities, whereby each entity stores key/value pairs representing the property names and property values (as one would expect). Along with the entity’s properties we also need to define three system properties: PartitionKey (a string which identifies the partition an entity belongs to), RowKey (a string which uniquely identifies the entity within the partition) and Timestamp (a DateTime which indicates when the entity was last modified).

Note: An entity, according to Get started with Azure Table storage using .NET, can be up to 1 MB in size and can have a maximum of 252 properties.

In C# terms we can simply derive our entity object from Microsoft.WindowsAzure.Storage.Table.TableEntity, which supplies the required system properties. For example

using Microsoft.WindowsAzure.Storage.Table;

public class PersonEntity : TableEntity
{
   public PersonEntity()
   {
      // We need to expose a parameterless ctor so the SDK can rehydrate entities
   }

   public PersonEntity(string firstName, string lastName)
   {
      PartitionKey = lastName;
      RowKey = firstName;
   }

   public int Age { get; set; }
}

In the above we’ve declared the last name as the PartitionKey and the first name as the RowKey. Obviously a better strategy will be required in production systems; see Designing a Scalable Partitioning Strategy for Azure Table Storage and Azure Storage Table Design Guide: Designing Scalable and Performant Tables for more information on partition key strategies.
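
Putting the entity to use, a minimal sketch of inserting and then retrieving a PersonEntity (the table name people is arbitrary) might look like this

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class TableExample
{
   public static void SaveAndLoadPerson(CloudStorageAccount account)
   {
      CloudTableClient tableClient = account.CreateCloudTableClient();

      CloudTable table = tableClient.GetTableReference("people");
      table.CreateIfNotExists();

      // Insert (or replace) an entity keyed on last name/first name
      var person = new PersonEntity("Jane", "Doe") { Age = 42 };
      table.Execute(TableOperation.InsertOrReplace(person));

      // A point query needs both the PartitionKey and the RowKey
      TableResult result = table.Execute(
         TableOperation.Retrieve<PersonEntity>("Doe", "Jane"));
      PersonEntity retrieved = (PersonEntity)result.Result;
   }
}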

For more information on developing C#/.NET code for Table Storage, check out Get started with Azure Table storage using .NET.

Queues

Queues offer a message queue service, albeit not as fully featured as MSMQ. This feature allows us to communicate between two separate applications and/or Azure Functions to put together a composite application.

Let’s go to Azure Functions and create a C# QueueTrigger function. The default code simply logs the queue’s message and looks like this

using System;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {myQueueItem}");
}

You’ll need to set the Queue name to the name of the queue you’ll be sending messages on, so let’s call ours test-queue.

Now Run the function.

From Storage Explorer, or from another instance of the Azure portal (as we want to watch the Azure function running), create a Queue named test-queue and that’s basically it.

Now Add a message to the queue and watch your Azure function log. It should display your message. After a message has been received it’s automatically removed from the queue.
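
If you’d rather enqueue messages from code than from Storage Explorer, a minimal sketch (assuming the account is the same one the function is watching) might look like this

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class QueueExample
{
   public static void SendMessage(CloudStorageAccount account)
   {
      CloudQueueClient queueClient = account.CreateCloudQueueClient();

      CloudQueue queue = queueClient.GetQueueReference("test-queue");
      queue.CreateIfNotExists();

      // This should show up in the Azure function's log shortly afterwards
      queue.AddMessage(new CloudQueueMessage("Hello from C#"));
   }
}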

Cosmos DB

At the time of writing Azure Cosmos DB is in preview, so anything I mention here may well have changed by the time it’s out of preview.

I won’t go through every capability of Cosmos DB – you can check out the Welcome to Azure Cosmos DB link for that – but some of the key capabilities that interest me the most include data distributed across regions, “standard” APIs (including MongoDB and the Table API) as well as tunable consistency.