
Auto generating test data with Bogus

I’ve visited this topic (briefly) in the past using NBuilder and AutoFixture.

When writing unit tests it’s useful to be able to create objects quickly, with random or, better still, semi-random data.

When I say semi-random, what I mean is that we might have some type with an Id property where we know the Id can only take a certain range of values, so we want a value generated for this property within that range; or maybe we’d like a CreatedDate property with a value that resembles a date n years in the past, as opposed to just a random date.

This is where libraries such as Faker.Net and Bogus come in – they allow us to generate objects and data which meet certain criteria, and they also include the ability to generate data which “looks” like real data. For example, first names, jobs, addresses etc.

Let’s look at an example – first we’ll see what the “model” looks like, i.e. the object we want to generate test data for

public class MyModel
{
   public string Name { get; set; }
   public long Id { get; set; }
   public long Version { get; set; }
   public Date Created { get; set; }
}

public struct Date
{
   public int Day { get; set; }
   public int Month { get; set; }
   public int Year { get; set; }
}

The Date struct is included because it mirrors a similar type of object I get from some web services, and because it obviously requires certain constraints it seemed a good example for writing such code.

Now let’s assume that we want to create a MyModel object. Using Bogus we can create a Faker object and apply rules which constrain or randomize the data. Here’s an example implementation

var modelFaker = new Faker<MyModel>()
   .RuleFor(o => o.Name, f => f.Name.FirstName())
   .RuleFor(o => o.Id, f => f.Random.Number(100, 200))
   .RuleFor(o => o.Version, f => f.Random.Number(300, 400))
   .RuleFor(o => o.Created, f =>
   {
      var date = f.Date.Past();
      return new Date { Day = date.Day, Month = date.Month, Year = date.Year };
   });

var myModel = modelFaker.Generate();

Initially we create the equivalent of a builder class, in this case the Faker. Whilst we can generate a MyModel without all the rules being set, the rules allow us to customize what’s generated into something more meaningful for our use – especially when it comes to the Date type. So, in order, the rules state that the Name property on MyModel should resemble a first name, the Id is set to a random value within the range 100-200, likewise the Version is constrained to the range 300-400, and finally the Created property is set by generating a past date and assigning the day, month and year to our Date struct.
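As an aside, Past optionally takes the number of years to go back (it defaults to one), which maps onto the earlier idea of a date n years in the past. A minimal sketch, reusing the f from a rule lambda – the 5 is an arbitrary choice:

// Generate a random date up to 5 years in the past (Past defaults to 1 year)
var date = f.Date.Past(5);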

Finally we Generate an instance of a MyModel object via the Faker builder class. An example of the values supplied by the Faker object is shown below

Created - Day = 12, Month = 7, Year = 2016
Id - 116
Name - Gwen
Version - 312
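If we need a whole collection of test objects rather than a single instance, Faker<T> can also generate several in one call. A quick sketch, with an arbitrary count of 10:

// Generate a list of 10 MyModel instances using the same rules
var models = modelFaker.Generate(10);
foreach (var m in models)
{
   Console.WriteLine($"{m.Id}: {m.Name}");
}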

Obviously this code only works for classes with a default constructor. So what do we do if there’s no default constructor?

Let’s add the following to the MyModel class

public MyModel(long id, long version)
{
   Id = id;
   Version = version;
}

Now we simply change our Faker to look like this

var modelFaker = new Faker<MyModel>()
   .CustomInstantiator(f => 
      new MyModel(
         f.Random.Number(100, 200), 
         f.Random.Number(300, 400)))
   .RuleFor(o => o.Name, f => f.Name.FirstName())
   .RuleFor(o => o.Created, f =>
   {
      var date = f.Date.Past();
      return new Date { Day = date.Day, Month = date.Month, Year = date.Year };
   });

What if you don’t want to create the whole MyModel object via Faker but, instead, you just want to generate a valid-looking first name for the Name property? Or what if you’re already using something like NBuilder but want to just use the Faker data generation code?

This can easily be achieved by using the non-generic Faker. Create an instance of it and you’ve got access to the same data, so for example

var f = new Faker();

myModel.Name = f.Name.FirstName();
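The non-generic Faker exposes the same data sets used by the rules earlier, so the jobs and addresses mentioned at the start are just as easy to generate. A quick sketch, reusing the f instance from above:

// A few of the other data sets Bogus supplies
var email = f.Internet.Email();
var city = f.Address.City();
var job = f.Name.JobTitle();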

References

Bogus for .NET/C#

RabbitMQ in Docker

As part of a series of posts on running some core types of applications in Docker, it’s time to run up a message queue. Let’s try the Docker RabbitMQ container.

To run, simply use the following

docker run -d --hostname my-rabbit --name some-rabbit -p 5672:5672 rabbitmq:3

If you’d like to use the RabbitMQ management tool, then run this instead (which runs RabbitMQ and the management tool)

docker run -d --hostname my-rabbit --name some-rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3-management

Note: the management UI is accessed via a web browser using http://host:15672, obviously replacing host with your server name or IP address. The default login and password are guest/guest.

From Visual Studio, create two test applications, one for sending messages and a second for receiving them (mine are both console applications), and using NuGet add RabbitMQ.Client to both. I’m going to just duplicate the code from the RabbitMQ tutorial here, so the application for sending messages should look like this

using System;
using System.Text;
using RabbitMQ.Client;

static void Main(string[] args)
{
   var factory = new ConnectionFactory
   {
      HostName = "localhost",
      Port = AmqpTcpEndpoint.UseDefaultPort
   };
   using (var connection = factory.CreateConnection())
   {
      using (var channel = connection.CreateModel())
      {
         // queue name, durable, exclusive, autoDelete, arguments
         channel.QueueDeclare("hello", false, false, false, null);

         var message = "Hello World!";
         var body = Encoding.UTF8.GetBytes(message);

         // publish to the default exchange, routed via the queue name
         channel.BasicPublish(String.Empty, "hello", null, body);
         Console.WriteLine(" [x] Sent {0}", message);
      }
   }
   Console.ReadLine();
}

The receiver code looks like this

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

static void Main(string[] args)
{
   var factory = new ConnectionFactory
   {
      HostName = "localhost",
      Port = AmqpTcpEndpoint.UseDefaultPort
   };
   using (var connection = factory.CreateConnection())
   {
      using (var channel = connection.CreateModel())
      {
         channel.QueueDeclare("hello", false, false, false, null);

         var consumer = new EventingBasicConsumer(channel);
         consumer.Received += (model, ea) =>
         {
            var body = ea.Body;
            var message = Encoding.UTF8.GetString(body);
            Console.WriteLine(" [x] Received {0}", message);
         };
         // autoAck is true, so messages are acknowledged as soon as they're delivered
         channel.BasicConsume("hello", true, consumer);

         Console.WriteLine("Press [enter] to exit.");
         Console.ReadLine();
      }
   }
}

Obviously localhost should be replaced with the host name/ip address of the Docker server running RabbitMQ.

Redis service and client

Redis is an in-memory key/value store and cache. Luckily there’s a Docker image for this.

Let’s run an instance of Redis via Docker on an Ubuntu server

docker run --name myredis -d -p 6379:6379 redis

Oh, how I love Docker (and of course the community who create these images). This will run Redis and return immediately to your host’s command prompt (i.e. we do not go into the instance of Redis).

To run the Redis client we’ll need to switch into the Docker container running Redis and then run the Redis command line interface (redis-cli), thus

docker exec -it myredis bash
redis-cli

We’ll use this CLI later to view data in the cache.

C# client

There are several Redis client libraries available for .NET/C#; I’m going to go with ServiceStack.Redis, mainly because I’ve been using ServiceStack recently. So create a console application, add the NuGet package for ServiceStack.Redis and now add the following code

using ServiceStack.Redis;

public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
}

class Program
{
   static void Main(string[] args)
   {
      var client = new RedisClient("redis://xxx.xxx.xxx.xxx:6379");
      client.Add("1234", new Person { FirstName = "Scooby", LastName = "Doo" });
      client.Save();
   }
}

Obviously change xxx.xxx.xxx.xxx to your server’s IP address.

This code will simply write the Person object to the store against the key 1234. If you have the redis-cli running then you can type

get 1234

this should return the following

"{\"FirstName\":\"Scooby\",\"LastName\":\"Doo\"}"

Of course, we now need to use the ServiceStack.Redis client to read our data back. Just use this

var p = client.Get<Person>("1234");
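Since Redis is often used as a cache, it’s worth noting the client can also store a value with an expiry. A small sketch, assuming the client from above (the one-minute TTL is an arbitrary choice):

// Store the value with a time-to-live of one minute (after which Redis evicts it)
client.Set("1234", new Person { FirstName = "Scooby", LastName = "Doo" }, TimeSpan.FromMinutes(1));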

Security

By default Redis has no security set up, hence we didn’t need to specify a user name or password. Obviously in a production environment we’d need to implement such security (or if using Redis via a cloud provider such as Azure).

For our instance we can secure Redis as a whole using the AUTH command. So from redis-cli run

CONFIG SET requirepass "password"
AUTH "password"

If you run AUTH "password" and get ERR Client sent AUTH, but no password is set, you’ll need the CONFIG line first; otherwise the AUTH line should work fine. Our client application will need the following change to the URL

var client = new RedisClient("redis://password@xxx.xxx.xxx.xxx:6379");

To remove the password (if you need to) simply type the following from redis-cli

CONFIG SET requirepass ""

References

https://github.com/ServiceStack/ServiceStack.Redis

MongoDB revisited

As I’m busy setting up my Ubuntu server, I’m going to revisit a few topics that I’ve covered in the past to see whether there are changes to how we work with the various servers. Specifically, I’ve gone Docker crazy and want to run these various servers in Docker.

First up, let’s see what we need to do to get a basic MongoDB installation up and running, along with the C# client to access it (it seems some things have changed since I last did this).

Getting the server up and running

First off we’re going to get the Ubuntu server set up with an instance of MongoDB. So let’s get the latest version of Mongo for Docker

docker pull mongo:latest

this will simply download the latest version of MongoDB but not run it. So our next step is to run the MongoDB Docker instance. By default the port MongoDB uses is 27017, but this isn’t available to the outside world, so we’re going to want to map it to a port accessible to our client machine(s). I’m going to use port 28000 (there’s no specific reason for this choice). Run the following command from Ubuntu

docker run -p 28000:27017 --name my-mongo -d mongo

We’ve mapped MongoDB to the previously mentioned port and named the instance my-mongo. This will run MongoDB in the background. We can now look to write a simple C# client to access the instance.

Interactive Shell

Before we proceed to the client, we might wish to set up users etc. in MongoDB and hence run its shell. Running the following

docker exec -t my-mongo mongo

didn’t quite work as expected; whilst I was placed inside the MongoDB shell, commands didn’t seem to run.

Note: this could be something I’m missing here, but when pressing enter the shell seemed to think I was about to add another command. The likely culprit is the missing -i switch, which keeps STDIN open (i.e. docker exec -it my-mongo mongo).

To work with the shell I found it simpler to connect to the Docker instance using bash, i.e.

docker exec -it my-mongo bash

then run

mongo

to access the shell.

I’m not going to set up any users etc. at this point; we’ll just use the default setup.

Creating a simple client

Let’s fire up Visual Studio 2015 and create a console application. Then using NuGet add the MongoDB.Driver by MongoDB, Inc. Now add the following code to your Console application

using MongoDB.Bson;
using MongoDB.Driver;

public class Person
{
   public ObjectId Id { get; set; }
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public int Age { get; set; }
}

class Program
{
   static void Main(string[] args)
   {
      var client = new MongoClient("mongodb://xxx.xxx.xxx.xxx:28000");
      var db = client.GetDatabase("MyDatabase");
      var collection = db.GetCollection<Person>("People");
      collection.InsertOne(new Person
      {
         FirstName = "Scooby",
         LastName = "Doo",
         Age = 27
      });
   }
}

Obviously replace the xxx.xxx.xxx.xxx with the IP address of your server (in my case my Ubuntu server box); the port obviously matches the port we exposed via Docker. You don’t need to “create” the database explicitly via the shell or a command – you can just run this code and it’ll create MyDatabase, then the People collection, and then insert a record.

Did it work?

Hopefully your console application just inserted a record; there should have been no timeout or other exception. Of course we can also use the console application to read the data back, for example

var client = new MongoClient("mongodb://xxx.xxx.xxx.xxx:28000");
var db = client.GetDatabase("MyDatabase");
var collection = db.GetCollection<Person>("People");
foreach (var p in collection.FindSync(_ => true).ToList())
{
   Console.WriteLine($"{p.FirstName} {p.LastName}");
}

I’m using the synchronous methods to find and create the list, solely because my console application is obviously pretty simple, but the MongoDB driver library offers async versions of these methods as well.
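For completeness, here’s a rough sketch of the same read using the async API (assuming the same client and Person type as above, and that we’re inside an async method):

var db = client.GetDatabase("MyDatabase");
var collection = db.GetCollection<Person>("People");

// Find is the fluent entry point; ToListAsync materializes the cursor asynchronously
var people = await collection.Find(_ => true).ToListAsync();
foreach (var p in people)
{
   Console.WriteLine($"{p.FirstName} {p.LastName}");
}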

The above code will write out Scooby Doo as the only entry in our DB, so all worked fine. How about we do the same thing using the shell.

If we now switch back to the server and, if it’s not already running, run the MongoDB shell as previously outlined, from the shell run the following

use MyDatabase
db.People.find()

We should now see a single entry

{ 
  "_id" : ObjectId("581d9c5065151e354837b8a5"), 
  "FirstName" : "Scooby", 
  "LastName" : "Doo", 
  "Age" : 27 
}

Just remember, we didn’t set this instance of MongoDB up to use a Docker volume and hence when you remove the Docker instance the data will disappear.

So let’s quickly revisit the command used to run MongoDB within Docker and fix this. First off, exit back to the server’s prompt (i.e. out of the Mongo shell and out of the Docker bash instance).

Now stop my-mongo using

docker stop my-mongo

You can restart mongo at this point using

docker start my-mongo

and your data will still exist, but if you run the following after stopping the mongo instance

docker rm my-mongo

and execute Mongo again, the data will have gone. To fix this we add a volume mapping to the command line arguments, so we will execute the following

docker run -p 28000:27017 -v mongodb:/data/db --name my-mongo -d mongo

the inclusion of the -v switch maps MongoDB’s data directory (/data/db, the default dbpath in the official image) to the volume on the local machine named mongodb. By default this is created in /var/lib/docker/volumes, but of course you could supply a path to an alternate location.

Remember, at this point we’re still using the default security (i.e. none); I will probably create a post on setting up MongoDB security in the near future.

.NET Core

This should be a pretty short post, just to outline using .NET Core on a Linux/Ubuntu server via Docker.

We could look to install .NET Core via apt-get, but I’ve found it so much simpler to run up a Docker container with .NET Core already installed, so let’s do that.

First up, we run

docker run -it microsoft/dotnet:latest

This is an official Microsoft-owned container. The use of :latest means we should get the latest version each time we run this command. The -it switch puts us straight into the Docker instance when it’s started, i.e. into bash.

Now this is great if you’re happy to lose any code within the Docker image when it’s removed, but if you want to link into your host’s file system it’s better to run

docker run -it -v /home/putridparrot/dev:/development microsoft/dotnet:latest

Where /home/putridparrot/dev, on the left of the colon, is a folder on your host/server filesystem; it maps to a folder named /development inside the Docker instance (i.e. it acts like a link/shortcut).

Now when you are within the Docker instance you can save files to the host (and vice versa) and they’ll persist beyond the life of the Docker instance; this also gives us a simple means of copying files from, say, a Windows machine into the instance of dotnet on the Linux server.

And that literally is that.

But let’s write some code and run it to prove everything is working.

To be honest, you should go and look at Install for Windows (or one of the installs for Linux or Mac), as I’m pretty much going to recreate the documentation on running dotnet from those pages.

To run the .NET Core framework (this includes compiling code) we use the command

dotnet

Before we get going, note that .NET Core is very much a preview/RC release, so there’s no guarantee things will work the same way in a production release. For reference, this is the version I’m currently using; running

dotnet --version

we get version 1.0.0-preview2-003131.

Let’s create the standard first application

Now, navigate to home and then make a directory for some source code

cd /home
mkdir HelloWorld
cd HelloWorld

yes, we’re going to write the usual Hello World application, but that’s simply because the dotnet command has a built-in “new project” template which generates this for us. So run

dotnet new

This creates two files. Program.cs looks like this

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

nothing particularly interesting here, i.e. it’s a standard Hello World implementation.

However a second file is created (which is a little more interesting), the project.json, which looks like this

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {},
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1"
        }
      },
      "imports": "dnxcore50"
    }
  }
}

Now, we need to run the following from the same folder as the project (as it will use the project.json file)

dotnet restore

this will restore any packages required for the project to build. To build and run the program we use

dotnet run

What are the dotnet command options?

Obviously you can run dotnet --help and find these out yourself, but just to give a quick overview, this is what you’ll see as a list of commands

Common Commands:
  new           Initialize a basic .NET project
  restore       Restore dependencies specified in the .NET project
  build         Builds a .NET project
  publish       Publishes a .NET project for deployment (including the runtime)
  run           Compiles and immediately executes a .NET project
  test          Runs unit tests using the test runner specified in the project
  pack          Creates a NuGet package

.NET Core in Visual Studio 2015

Visual Studio (2015) offers three .NET core specific project templates, Class Library, Console application and ASP.NET Core.

References

https://docs.microsoft.com/en-us/dotnet/articles/core/tutorials/using-with-xplat-cli
https://docs.asp.net/en/latest/getting-started.html
https://docs.microsoft.com/en-us/dotnet/articles/core/tools/project-json

Hosting a GitHub-like service on my Ubuntu server

I was thinking it would be cool to host some private git repositories on my Ubuntu server. That’s where Gogs comes in.

Sure, I could have created a git repository “manually”, but why bother when Gogs comes as a Docker image as well (and I love Docker).

Excellent documentation is available on the Gogs website or via Docker for Gogs on GitHub. But I will be recreating the bare essentials here so I know what worked for me.

First off, we’re going to need to create a Docker volume. For those unfamiliar with Docker, data stored inside a Docker instance is lost when the instance is removed. To allow data to persist beyond the life of the Docker instance, we create a Docker volume.

So first up, I just used the default Docker volume location using

docker volume create --name gogs-data

On my Ubuntu installation, this folder is created within /var/lib/docker.

Next up we’ll simply run

docker run --name=gogs -p 10022:22 -p 10080:3000 -v gogs-data:/data gogs/gogs

This command will download/install an instance of Gogs, which will be available on port 10080 of the server. The instance connects its data through to our previously created volume using the -v switch.

From your chosen web browser, you can display the Gogs configuration web page using http://your-server:10080.

As I didn’t have MySQL or PostgreSQL set up at the time of installation, I simply chose SQLite from the database options shown on the configuration page.

That’s pretty much it – you’ll need to create a user login for your account and then you can start creating repositories.

I’m not sure if I’m doing something wrong, but I noticed that when you want to clone a repository, Gogs says I should clone

http://localhost:3000/mark/MyApp.git

Obviously localhost is no use from a remote machine, and 3000 also seems to be incorrect; it should be 10080 (as specified in the run command), i.e. http://your-server:10080/mark/MyApp.git.

Building a Linux based NAS

I’ve put together a Linux (Ubuntu) based NAS device. I’m going to list the steps I took to get it all up and running. As a disclaimer, though, I should state I am not a Linux expert, so don’t take this as the perfect solution/setup.

Starting point

First off, I actually had a Windows Home Server (WHS) NAS device, but the hard disk controller died. This means I have a bunch of NTFS-formatted drives with lots of media files on them, and so I need the new NAS to be able to use these drives.

I bought myself a DELL PowerEdge T20 Tower Server which will act as my NAS. It’s a really well spec’d and priced piece of hardware, although a little larger than I would normally want for a NAS and without any easy-to-access hard drive bays – basically it’s a small tower-cased computer, but for the price it’s superb.

Next up I’ve installed Ubuntu Server. So obviously this is so much more than just a NAS, but I’ll concentrate on creating that side of things in this post.

Mounting my drives

After all the hard drives were connected I needed to mount them. As stated, these are already formatted as NTFS.

The first thing we need to do is create some folders to act as the mount points using

mkdir /mnt/foldername

Next up we need to actually mount the drives and assign them to the mount points; for this we’ll use

sudo mount -t ntfs /dev/sda2 /mnt/foldername

Obviously the /dev/sda2 needs to be set to your drive. The easiest way to check what devices/drives exist is to use

sudo fdisk -l

or if, like me, you’re looking for the NTFS drives you can use

sudo fdisk -l | grep NTFS

Mounting the drives like this is transient and the mounts will be lost when the NAS reboots, so we need to set them up to “automount” at startup. To do this we need to edit the fstab file, i.e.

sudo nano /etc/fstab

Whilst we can use the /dev/sda2 way of mounting, it’s far better to use the device’s UUID, as this allows us to hot-swap the drives or the like. To find the UUIDs of your drives simply run

sudo blkid

Now in the fstab file we’ll add lines like this

UUID=123456789ABCDEF /mnt/foldername ntfs defaults 0 2

Using Samba to access the drives

Whilst the previous steps have mounted the drives, which are now accessible via the server, we want these drives accessible from Windows and the internal network. Samba allows us to expose the drives, or more specifically their folders, to the LAN.

We need to edit the smb.conf file, so run

sudo nano /etc/samba/smb.conf

Here’s an example of an entry for one of our shared drives/folders. In this case take “share-name” to be the name you want to see in your network browser. The folder-path should point to your mounted drive and/or any folders you want to expose

[share-name]
   path = /path/folder-path
   read only = yes
   browseable = yes

In this case we’re stating the share can be browsed for (i.e. it’ll be visible in Windows Explorer when we access the server using \\mynas), and I’ve also set the folder to be read only. We add an entry for each folder/drive we want to expose.

Users and Permissions

As I have family members, each having their own user space on the NAS, I need to create users – this is simply a case of using

sudo useradd username

replacing username with the name of the user. We can list all users using

cut -d: -f1 /etc/passwd

Now the useradd command simply adds a user; it does not create a home folder or other private space for them. As my WHS was set up with folders for each user, I don’t strictly need the Linux /home folders, but just for completeness: when we added the users, Ubuntu added information to the /etc/passwd file to show where each user’s home folder is, but it didn’t create the folders. So we’ll create the home directories manually, one for each user, using

sudo mkdir /home/username

Now let’s change the ownership of the folders to each user

sudo chown -R username /home/username

to prove everything is working as we want we can run the command

ls -l

from the home folder and it’ll list the owners of each folder.

Obviously at this point we can go back to smb.conf and expose the users’ home folders to Windows and the LAN.

Once we’ve created the Samba folder configurations for the users’ folders, we’ll probably want to look at setting permissions on them, so we go back to editing smb.conf

[share-name]
   path = /path/folder-path
   read only = yes
   browseable = yes
   write list = bob
   create mask = 0755

We can set up a write list of all users who can write to the share (we can also use @group, replacing the word group with an actual group name). Notice the write list will give write access regardless of the read only option.

We can also define a create mask for files and directories (using standard Linux permission bits); the 0755 above, for example, gives the owner read/write/execute and everyone else read/execute.

How about having the NAS start-up and shutdown automatically

It’d be cool if we could conserve a little energy by turning the NAS off when it’s less likely to be used and back on when it’s most likely to be used.

I haven’t yet implemented this, but have tried the steps in the following document and they worked perfectly, so check out

https://www.mythtv.org/wiki/ACPI_Wakeup#Using_.2Fsys.2Fclass.2Frtc.2Frtc0.2Fwakealarm

Changing the host name

A slight detour here, but when I installed Ubuntu I chose a host name which, on reflection, I wanted to change to match the old WHS’s name, as this allows the family to use the new NAS as if it was the old one (i.e. I don’t have to get them to change to a new NAS name etc.).

We can find the host name using

hostname

and we can also use the same command to set a new hostname using

sudo hostname MyHostName

this is a temporary change, so to make it permanent (i.e. persist after a reboot) we edit

sudo nano /etc/hostname
sudo nano /etc/hosts

References

https://help.ubuntu.com/community/Samba/SambaServerGuide
http://askubuntu.com/questions/113733/how-do-i-correctly-mount-a-ntfs-partition-in-etc-fstab
http://askubuntu.com/questions/205841/how-do-i-mount-a-folder-from-another-partition