Category Archives: Docker

Creating an ansible dockerfile

I was trying to run the ansible image from Docker Hub, but it kept failing, so instead I went through the installation steps listed at http://docs.ansible.com/ansible/latest/intro_installation.html inside an Ubuntu Docker image

  • apt-get update
  • apt-get install software-properties-common
  • apt-add-repository ppa:ansible/ansible
  • apt-get update
  • apt-get install ansible

Then running ansible --version demonstrated everything was working.

I decided to use the same commands to create a Dockerfile and hence create my own image of ansible.

  • Create an ansible folder
  • Create the file Dockerfile in the ansible folder

Please note, this is not a complete implementation of a Docker image of ansible so much as a starting point for experimenting with ansible.

Now we'll create a fairly simple image based on Ubuntu and install all the libraries etc. as above, so the Dockerfile should have the following contents

FROM ubuntu:latest

RUN apt-get -y update
RUN apt-get -y install software-properties-common
RUN apt-add-repository ppa:ansible/ansible
RUN apt-get -y update
RUN apt-get -y install ansible

From the ansible folder run sudo docker build -t ansible . and, if all goes well, you'll have an image named ansible. Just run sudo docker images to see it listed.

Now run sudo docker run -it ansible to run the image in interactive mode and then within the container run ansible --version to see if everything worked.
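
For a slightly more convincing smoke test than --version, you can run an ad-hoc ansible command inside the container against the implicit localhost, for example

# uses the built-in ping module; expect a pong response
# (possibly with a warning about an empty inventory)
ansible localhost -m ping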

Now we're up and running, there's obviously more to do to really use ansible, and hopefully I'll cover some of those topics in subsequent posts on ansible.
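
As a small aside, each RUN instruction creates a separate image layer, so one common refinement is to collapse the steps into a single RUN. A sketch of the same Dockerfile, not a definitive image:

FROM ubuntu:latest

# a single layer for the whole install keeps the image a little leaner
RUN apt-get -y update && \
    apt-get -y install software-properties-common && \
    apt-add-repository ppa:ansible/ansible && \
    apt-get -y update && \
    apt-get -y install ansible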

Getting a CI server setup using TeamCity

We all use CI, right? I'd like a CI server set up at home for my own projects, partly to ensure I've not done anything silly to tie my code to my machine, but also to ensure that I can easily recreate the build requirements of my project.

I thought it’d be cool to get TeamCity up and running on my Linux server and (as is usual for me at the moment) I wanted it running in Docker. Luckily there’s an official build on Docker Hub.

Also see TeamCity on Docker Hub – it’s official now!

So first up we need to set up some directories on the host for TeamCity to store its data and logs in (otherwise removing the container will lose our projects).

First off, let’s get the Docker image for TeamCity, run

docker pull jetbrains/teamcity-server

Next create the following directories (or similar wherever you prefer)

~/teamcity/data
~/teamcity/logs

Then run the following

docker run -it --name teamcity-server  \
    -v ~/teamcity/data:/data/teamcity_server/datadir \
    -v ~/teamcity/logs:/opt/teamcity/logs  \
    -p 8111:8111 \
    jetbrains/teamcity-server

In the above we run an instance of TeamCity named teamcity-server, mapping the host directories we created to the datadir and logs of TeamCity. We also map host port 8111 to the TeamCity port 8111 (the host port being the first of the two in the above command).
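
Note that with -it the server stays tied to your terminal. Once you're happy it starts cleanly, a reasonable alternative is to run it detached with -d and follow the logs separately (if the interactive container from above still exists, remove it first with docker rm teamcity-server), something like

docker run -d --name teamcity-server \
    -v ~/teamcity/data:/data/teamcity_server/datadir \
    -v ~/teamcity/logs:/opt/teamcity/logs \
    -p 8111:8111 \
    jetbrains/teamcity-server

docker logs -f teamcity-server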

Now if you use your preferred browser to access

http://<ip-address>:8111

You’ll be asked a couple of questions for TeamCity to set up the datadir and DB. I just used the defaults. After reading and accepting the license agreement you’ll be asked to create a user name and password. Finally supply your name/email address etc. and save the changes.

Setting up a build agent

From the Agents page, click the Install Build Agents link and either download the ZIP and decompress it to a folder of your choice, or download and run the MSI. I've simply unzipped the build agent.

We’ll need to create a buildAgent.properties file before we can run the agent. This file should be in the conf folder of the agent.

Change the serverUrl to your TeamCity server (and anything else you might want to change).
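
As a guide, a minimal buildAgent.properties might look something like the following (the agent ZIP ships a buildAgent.dist.properties in the conf folder you can copy and rename; the agent name here is just an example)

# the TeamCity server set up above
serverUrl=http://<teamcity-server-ip>:8111

# a unique name for this agent (example value)
name=windows-agent-1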

Now run the following on the build agent machine (I’m using Windows for the agent)

agent.bat start

Finally click the TeamCity web page’s Unauthorized link in the Agents section and Authorize the agent. If you’re using the free version of TeamCity you can have three build agents.

Turning my Raspberry Pi Zero W into a Tomcat server

I just got hold of a Raspberry Pi Zero W and decided it’d be cool/fun to set it up as a Tomcat server.

Docker

I am (as some other posts might show) a bit of a fan of using Docker (although still a novice), so I went the same route with the Pi.

As per the post Docker comes to Raspberry Pi, run the following from your Pi's shell

curl -sSL https://get.docker.com | sh

Next, add your username to the docker group (I’m using the standard pi user)

sudo usermod -aG docker pi
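
The group change only takes effect on a new login session, so log out and back in (or use newgrp) and then check Docker responds without sudo, for example

# start a shell with the new group applied, then query the daemon
newgrp docker
docker info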

Pull Tomcat for Docker

Don't forget, the Raspberry Pi uses an ARM processor, so whilst Docker can help in deploying many things, the image still needs to have been built for the ARM architecture. Hence just trying to run the standard Tomcat image will fail with a message such as

exec user process caused "exec format error"

So to install Tomcat use the izone image

docker pull izone/arm:tomcat

Let’s run Tomcat

To run Tomcat (as per the izone Docker page), run

docker run --rm --name Tomcat -h tomcat \
-e PASS="admin" \
-p 8080:8080 \
-ti izone/arm:tomcat

You may need to wait a while before the Tomcat server is up and running, but once it is, simply use your browser to navigate to

http://<pi-zero-ip-address>:8080/

and you should see the Tomcat home page.
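
If you want to deploy your own WAR files, one approach worth trying is mounting a host folder over the webapps directory. This is a sketch that assumes the izone image follows the standard Tomcat layout (/usr/local/tomcat/webapps), which is worth verifying first:

# check where the webapps folder actually lives inside the container
docker exec -it Tomcat ls /usr/local/tomcat/webapps

# then run with a host folder (hypothetical path) mounted over webapps
docker run --rm --name Tomcat -h tomcat \
-e PASS="admin" \
-p 8080:8080 \
-v ~/webapps:/usr/local/tomcat/webapps \
-ti izone/arm:tomcat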

RabbitMQ in Docker

As part of a series of posts on running some core types of applications in Docker, it's time to run up a message queue. Let's try the Docker RabbitMQ container.

To run, simply use the following

docker run -d --hostname my-rabbit --name some-rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3

If you’d like to use the RabbitMQ management tool, then run this instead (which runs RabbitMQ and the management tool)

docker run -d --hostname my-rabbit --name some-rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3-management

Note: the management UI is accessed via a web browser at http://host:15672, obviously replacing host with your server IP address. The default login and password are guest/guest.

From Visual Studio, create two test applications, one for sending messages and the second for receiving messages (mine are both console applications), and using NuGet add RabbitMQ.Client to them both. I'm going to just duplicate the code from the RabbitMQ tutorial here, so the application for sending messages should look like this

using System;
using System.Text;
using RabbitMQ.Client;

static void Main(string[] args)
{
   var factory = new ConnectionFactory
   {
      HostName = "localhost",
      Port = AmqpTcpEndpoint.UseDefaultPort
   };
   using (var connection = factory.CreateConnection())
   {
      using (var channel = connection.CreateModel())
      {
         // declare the queue (idempotent if it already exists with the same settings)
         channel.QueueDeclare("hello", false, false, false, null);

         var message = "Hello World!";
         var body = Encoding.UTF8.GetBytes(message);

         // publish to the default exchange, routed by queue name
         channel.BasicPublish(String.Empty, "hello", null, body);
         Console.WriteLine(" [x] Sent {0}", message);
      }
   }
   Console.ReadLine();
}

The receiver code looks like this

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

static void Main(string[] args)
{
   var factory = new ConnectionFactory
   {
      HostName = "localhost",
      Port = AmqpTcpEndpoint.UseDefaultPort
   };
   using (var connection = factory.CreateConnection())
   {
      using (var channel = connection.CreateModel())
      {
         channel.QueueDeclare("hello", false, false, false, null);

         var consumer = new EventingBasicConsumer(channel);
         consumer.Received += (model, ea) =>
         {
            var body = ea.Body;
            var message = Encoding.UTF8.GetString(body);
            Console.WriteLine(" [x] Received {0}", message);
         };
         // auto-acknowledge messages as they arrive
         channel.BasicConsume("hello", true, consumer);

         Console.WriteLine("Press [enter] to exit.");
         Console.ReadLine();
      }
   }
}

Obviously localhost should be replaced with the host name/ip address of the Docker server running RabbitMQ.

Redis service and client

Redis is an in-memory key/value store and cache. Luckily there's a Docker image for this.

Let’s run an instance of Redis via Docker on an Ubuntu server

docker run --name myredis -d -p 6379:6379 redis

Oh, how I love Docker (and of course the community who create these images). This will run Redis and return immediately to your host's command prompt (i.e. we do not go into the instance of Redis).

To run the Redis client we'll need a shell inside the Docker container running Redis, from which we can run the Redis command line interface, thus

docker exec -it myredis bash
redis-cli

We’ll use this CLI later to view data in the cache.

C# client

There are several Redis client libraries available for .NET/C#; I'm going to go with ServiceStack.Redis, mainly because I've been using ServiceStack recently. So create a console application, add the NuGet package ServiceStack.Redis and then add the following code

using ServiceStack.Redis;

public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
}

class Program
{
   static void Main(string[] args)
   {
      var client = new RedisClient("redis://xxx.xxx.xxx.xxx:6379");
      // store the Person (serialized as JSON) against the key 1234
      client.Add("1234", new Person { FirstName = "Scooby", LastName = "Doo" });
      // ask Redis to persist the dataset to disk
      client.Save();
   }
}

Obviously change xxx.xxx.xxx.xxx to your server ip address.

This code will simply write the Person object to the store against the key 1234. If you have the redis-cli running then you can type

get 1234

which should produce the following

"{\"FirstName\":\"Scooby\",\"LastName\":\"Doo\"}"

Of course, we now need to use the ServiceStack.Redis client to read our data back. Just use this

var p = client.Get<Person>("1234");
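
As an aside, the client also offers an overload of Add taking an expiry, should you want cached values to time out. A minimal sketch, assuming the same client instance as above:

// store the value with a 10 minute time-to-live; the key expires automatically
client.Add("1234", new Person { FirstName = "Scooby", LastName = "Doo" }, TimeSpan.FromMinutes(10));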

Security

By default Redis has no security set up, hence we didn’t need to specify a user name and password. Obviously in a production environment we’d need to implement such security (or if using Redis via a cloud provider such as Azure).

For our instance we can secure Redis as a whole with a password, which clients then supply via the AUTH command. So from redis-cli run

CONFIG SET requirepass "password"
AUTH "password"

If you run AUTH "password" and get ERR Client sent AUTH, but no password is set, you'll need the CONFIG line first; otherwise the AUTH line should work fine. Our client application then needs the following change to the URL

var client = new RedisClient("redis://password@xxx.xxx.xxx.xxx:6379");

To remove the password (if you need to) simply type the following from the redis-cli

CONFIG SET requirepass ""

References

https://github.com/ServiceStack/ServiceStack.Redis

MongoDB revisited

As I'm busy setting up my Ubuntu server, I'm going to revisit a few topics that I've covered in the past, to see whether there are changes to working with various servers. Specifically I've gone Docker crazy and want to run these various servers in Docker.

First up, let's see what we need to do to get a basic MongoDB installation up and running and the C# client to access it (it seems some things have changed since I last did this).

Getting the server up and running

First off we're going to get the Ubuntu server set up with an instance of MongoDB. So let's get the latest version of mongo for Docker

docker pull mongo:latest

This will simply download the latest version of MongoDB but not run it. So our next step is to run the MongoDB Docker instance. By default MongoDB uses port 27017, but this isn't available to the outside world, so we're going to map it to a port accessible to our client machine(s). I'm going to use port 28000 (there's no specific reason for this choice). Run the following command on the Ubuntu server

docker run -p 28000:27017 --name my-mongo -d mongo

We’ve mapped MongoDB to the previously mentioned port and named the instance my-mongo. This will run MongoDB in the background. We can now look to write a simple C# client to access the instance.

Interactive Shell

Before we proceed to the client, we might wish to set up users etc. in MongoDB, and hence run its shell. However, running the following

docker exec -t my-mongo mongo

didn't quite work as expected: whilst I was placed inside the MongoDB shell, commands didn't seem to run.

Note: this is most likely because the -i flag (keep STDIN open) is missing, so whilst the shell gets a terminal it receives no input; docker exec -it my-mongo mongo should behave correctly.

To work with the shell I found it simpler to connect to the Docker instance using bash, i.e.

docker exec -it my-mongo bash

then run

mongo

to access the shell.

I'm not going to set up any users etc. at this point, we'll just use the default setup.

Creating a simple client

Let's fire up Visual Studio 2015 and create a console application. Then using NuGet add the MongoDB.Driver package by MongoDB, Inc. Now add the following code to your console application

using MongoDB.Bson;
using MongoDB.Driver;

public class Person
{
   // assigned by the driver on insert
   public ObjectId Id { get; set; }
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public int Age { get; set; }
}

class Program
{
   static void Main(string[] args)
   {
      var client = new MongoClient("mongodb://xxx.xxx.xxx.xxx:28000");
      var db = client.GetDatabase("MyDatabase");
      var collection = db.GetCollection<Person>("People");
      collection.InsertOne(new Person
      {
         FirstName = "Scooby",
         LastName = "Doo",
         Age = 27
      });
   }
}

Obviously replace the xxx.xxx.xxx.xxx with the IP address of your server (in my case my Ubuntu server box); the port matches the port we exposed via Docker. You don't need to "create" the database explicitly via the shell or a command; you can just run this code and it'll create MyDatabase, then the collection People, and then insert a record.

Did it work?

Hopefully your console application just inserted a record; there should have been no timeout or other exception. Of course we can also read the data back from the console application, for example

var client = new MongoClient("mongodb://xxx.xxx.xxx.xxx:28000");
var db = client.GetDatabase("MyDatabase");
var collection = db.GetCollection<Person>("People");
// synchronously fetch every document in the People collection
foreach (var p in collection.FindSync(_ => true).ToList())
{
   Console.WriteLine($"{p.FirstName} {p.LastName}");
}

I’m using the synchronous methods to find and create the list, solely because my Console application is obviously pretty simple, but the MongoDB driver library offers Async versions of these methods as well.
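
For example, an async variant of the same query might look like the following (a sketch; it assumes it's running inside an async method, since Main can't be async in C# 6):

// asynchronously fetch every Person document
var people = await collection.Find(_ => true).ToListAsync();
foreach (var p in people)
{
   Console.WriteLine($"{p.FirstName} {p.LastName}");
}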

Either way, the code writes out Scooby Doo as the only entry in our DB, so all worked fine. How about we do the same thing using the shell?

Now switch back to the server and, if it's not already running, start the MongoDB shell as previously outlined. From the shell run the following

use MyDatabase
db.People.find()

We should now see a single entry

{ 
  "_id" : ObjectId("581d9c5065151e354837b8a5"), 
  "FirstName" : "Scooby", 
  "LastName" : "Doo", 
  "Age" : 27 
}

Just remember, we didn't set this instance of MongoDB up to use a Docker volume, and hence when you remove the Docker container the data will disappear.

So let's quickly revisit the command used to run MongoDB within Docker and fix this. First, exit back to the server's prompt (i.e. out of the Mongo shell and out of the Docker bash instance).

Now stop my-mongo using

docker stop my-mongo

You can restart mongo at this point using

docker start my-mongo

and your data will still exist, but if you run the following after stopping the mongo instance

docker rm my-mongo

and run Mongo again, the data will have gone. To fix this we add a volume mapping to the command line, so we execute the following

docker run -p 28000:27017 -v mongodb:/data/db --name my-mongo -d mongo

The -v switch maps the MongoDB data directory (/data/db, which is where the official mongo image stores its data) to the named volume mongodb. By default this volume is created under /var/lib/docker/volumes, but of course you could supply a path to an alternative location.

Remember, at this point we're still using the default security (i.e. none); I will probably create a post on setting up mongo security in the near future.

.NET Core

This should be a pretty short post, just to outline using .NET core on Linux/Ubuntu server via Docker.

We could look to install .NET Core via apt-get, but I've found it so much simpler running up a Docker container with .NET Core already installed, so let's do that.

First up, we run

docker run -it microsoft/dotnet:latest

This is an official Microsoft-owned container. The use of :latest means we should get the latest version each time we run this command. The -it switch places us straight into the Docker instance when it's started, i.e. into bash.

Now this is great if you're happy to lose any code within the Docker image when it's removed, but if you want to link into your host's file system it's better to run

docker run -it -v /home/putridparrot/dev:/development microsoft/dotnet:latest

Where /home/putridparrot/dev, on the left of the colon, is a folder on your host/server filesystem that maps to a folder named development inside the Docker instance (i.e. it acts like a link/shortcut).

Now when you are within the Docker instance you can save files to the host (and vice versa) and they'll persist beyond the life of the Docker instance, which also gives us a simple means of copying files from, say, a Windows machine into the dotnet instance on the Linux server.

And that literally is that.

But let’s write some code and run it to prove everything is working.

To be honest, you should go and look at Install for Windows (or one of the installs for Linux or Mac), as I'm pretty much going to recreate the documentation on running dotnet from those pages.

To work with the .NET Core framework (this includes compiling code) we use the command

dotnet

Before we get going, note that .NET Core is very much a preview/RC release, so there's no guarantee things will work the same way in a production release. Running

dotnet --version

we get version 1.0.0-preview2-003131.

Let’s create the standard first application

Now, navigate to home and then make a directory for some source code

cd /home
mkdir HelloWorld
cd HelloWorld

Yes, we're going to write the usual Hello World application, but that's simply because the dotnet command has a built-in "new project" command which generates this for us. So run

dotnet new

This creates two files. Program.cs looks like this

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

Nothing particularly interesting here; it's a standard Hello World implementation.

However a second, more interesting file is created, project.json, which looks like this

{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {},
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1"
        }
      },
      "imports": "dnxcore50"
    }
  }
}
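
The top-level dependencies section is where NuGet package references go. For example, to pull in a package you'd change it to something like the following (an illustrative sketch; Newtonsoft.Json 9.0.1 was a current version around this time) and then re-run dotnet restore

"dependencies": {
  "Newtonsoft.Json": "9.0.1"
},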

Now, we need to run the following from the same folder as the project (as it will use the project.json file)

dotnet restore

this will restore any packages required for the project to build. To build and run the program we use

dotnet run

What are the dotnet cmd options

Obviously you can run dotnet --help and find these out yourself, but just to give a quick overview, this is what you'll see as a list of commands

Common Commands:
  new           Initialize a basic .NET project
  restore       Restore dependencies specified in the .NET project
  build         Builds a .NET project
  publish       Publishes a .NET project for deployment (including the runtime)
  run           Compiles and immediately executes a .NET project
  test          Runs unit tests using the test runner specified in the project
  pack          Creates a NuGet package

.NET Core in Visual Studio 2015

Visual Studio (2015) offers three .NET core specific project templates, Class Library, Console application and ASP.NET Core.

References

https://docs.microsoft.com/en-us/dotnet/articles/core/tutorials/using-with-xplat-cli
https://docs.asp.net/en/latest/getting-started.html
https://docs.microsoft.com/en-us/dotnet/articles/core/tools/project-json

Hosting a github like service on my Ubuntu server

I was thinking it would be cool to host some private git repositories on my Ubuntu server. That’s where Gogs comes in.

Sure, I could have created a git repository "manually", but why bother when Gogs is available as a Docker image as well (and I love Docker).

Excellent documentation is available on the Gogs website and via Docker for Gogs on GitHub, but I'll recreate the bare essentials here so I have a record of what worked for me.

First off, we're going to need to create a Docker volume. For those unfamiliar with Docker, data written inside a container is lost when the container is removed. To allow data to persist beyond the life of the container, we create a Docker volume.

So first up, I just used the default Docker volume location using

docker volume create --name gogs-data

On my Ubuntu installation, this folder is created within /var/lib/docker.

Next up we’ll simply run

docker run --name=gogs -p 10022:22 -p 10080:3000 -v gogs-data:/data gogs/gogs

This command will download/install an instance of Gogs, which will be available on port 10080 of the server. The instance connects its data through to our previously created volume via the -v switch.

From your chosen web browser, you can display the Gogs configuration web page using http://your-server:10080.

As I didn't have MySQL or PostgreSQL set up at the time of installation, I simply chose SQLite from the database options shown on the configuration page.

That’s pretty much it – you’ll need to create a user login for your account and then you can start creating repositories.

I’m not sure if I’m doing something wrong, but I noticed that when you want to clone a repository, Gogs says I should clone

http://localhost:3000/mark/MyApp.git

Obviously localhost is no use from a remote machine, and 3000 is the port inside the container rather than the mapped host port 10080 (as specified in the run command). This is most likely down to the Domain and Application URL values set on the Gogs configuration page; pointing those at the server's address and mapped port should fix the displayed clone URL.
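
In the meantime you can construct the clone URL yourself from the server address and the mapped ports, for example (using the repository above, with your-server standing in for the real address)

# HTTP clone via the mapped web port
git clone http://your-server:10080/mark/MyApp.git

# SSH clone via the mapped SSH port (10022 maps to 22 in the run command above)
git clone ssh://git@your-server:10022/mark/MyApp.git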

lookup index.docker.io no DNS servers error

I've been learning Docker lately and all was working well; then today I started seeing the error lookup index.docker.io no DNS servers when trying to pull an image from Docker Hub. Very strange, as this worked fine previously.

For the record, I was able to use apt-get to update packages and I could ping the index.docker.io address, so I’m not sure what changed to make this break.

Anyway, to solve the problem we can simply add a dns-nameservers entry to the file /etc/network/interfaces.

For example

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 192.x.x.x
netmask 255.255.255.0
gateway 192.x.x.1
dns-nameservers 8.8.8.8 8.8.4.4

In the above, the dns-nameservers line adds two Google name servers (8.8.8.8 and 8.8.4.4) to the eth0 configuration in the interfaces file.

To activate these changes (without a reboot) just use

ifdown eth0 && ifup eth0
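
To confirm the fix, it's worth checking both name resolution and a pull, for example (nslookup assumes the dnsutils package is installed)

nslookup index.docker.io
docker pull hello-world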

Getting started with Docker

I’ve been wanting to try out Docker for a while. Finally got my Ubuntu 14.04 server up and running, so now’s the time.

First off I installed Docker as per the instructions in How to Install Docker on Ubuntu 14.04 LTS.

What version of Docker am I running?

Simple enough, just type docker version. Assuming the docker daemon is running you should see various bits of version information.

I’m currently running Client version 1.0.1

How to search for container images

We're going to start by simply trying to find an existing image that we can pull onto our server; typing docker search <name of image> will result in a list of images matching the supplied image name.

For example docker search mongodb will return a list of images from the Docker Hub matching mongodb.

Installing an image

Once we've found the image we want, we need to "pull" it onto our machine using docker pull <name of image>

So let’s pull down the official mongodb image

docker pull mongo

This command will cause Docker to download the current mongo image. Once completed, you will not need to pull the image again; it will be stored locally.

Hold on, where are images stored locally?

Run docker info to see where the root directory for docker is, as well as information on the number of images and containers stored locally.

Running a container

Once we've pulled down an image we can run an application within a container, for example docker run <name of image> <command to run>

Let’s run our mongodb container by typing the following

  • docker run --name some-mongo -d mongo
  • docker run -it --link some-mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT"'

In the above some-mongo should be replaced with the name you want to use.

Note: the second of these commands runs the mongo client in interactive mode, i.e. we will be placed into the mongodb shell.

Updating the container

An image itself may be based upon an OS, such as Ubuntu, so we can actually run commands on the operating system in the container, for example apt-get. Installing software into an OS container is useful, but we will then want to persist such changes.

Whilst the changes have been made in the running container, they are not yet persisted to an image. To persist our changes we need to commit them.

First we need to find the ID of the container, for example running docker ps -l, then we save the changes using docker commit <ID> <new container name>.

You needn’t type the whole ID from the call to docker ps -l, the first three or four characters (assuming they’re unique) will suffice.

The value returned from the commit is the new image id.
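
Putting that together, the whole flow looks something like this (the ID and image name here are hypothetical)

docker ps -l                      # find the ID of the last-run container, e.g. a1b2
docker commit a1b2 my-mongo-plus  # commit it as a new image (returns the new image id)
docker images                     # the new image should now be listed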

Inspect

We can view more extensive information on a container by using docker inspect <ID>. Remember the ID can be found using docker ps.

Inspect allows us to see all sorts of information, including the container's IP address.
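
For example, to pull out just the IP address you can pass a Go template via --format (a hedged example, as the exact field names can vary between Docker versions)

docker inspect --format '{{ .NetworkSettings.IPAddress }}' <ID>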

Pushing an image to the Docker Hub Registry

Once we’re happy with our image we can push it to the docker hub to allow others to share it. Simply use docker push <name of image>.

To find the images on your machine simply use docker images.

Removing all containers and all images

We can use docker’s ps command along with the docker rm command

docker rm $(docker ps -a -q)

to remove all containers and we can use the docker rmi command to remove images as follows

docker rmi $(docker images -q)