Getting a CI server setup using TeamCity

We all use CI, right? I’d even prefer to have a CI server set up at home for my own projects, to at least ensure I’ve not done anything silly to tie my code to my machine, but also to ensure that I can easily recreate the build requirements of my project.

I thought it’d be cool to get TeamCity up and running on my Linux server and (as is usual for me at the moment) I wanted it running in Docker. Luckily there’s an official build on Docker Hub.

Also see TeamCity on Docker Hub – it’s official now!

So first up we need to set up some directories on the host for TeamCity to store data and logs in (otherwise shutting down Docker will lose our projects).

First off, let’s get the Docker image for TeamCity, run

docker pull jetbrains/teamcity-server

Next create the following directories (or similar, wherever you prefer)

mkdir -p ~/teamcity/data ~/teamcity/logs

Then run the following

docker run -it --name teamcity-server \
    -v ~/teamcity/data:/data/teamcity_server/datadir \
    -v ~/teamcity/logs:/opt/teamcity/logs \
    -p 8111:8111 \
    jetbrains/teamcity-server

In the above we run an instance of TeamCity named teamcity-server, mapping the host directories we created to the datadir and logs of TeamCity. We also map host port 8111 to the TeamCity port 8111 (the host port being the first of the two in the above command).

Now if you use your preferred browser to access

http://<ip_address>:8111
You’ll be asked a couple of questions for TeamCity to set up the datadir and DB. I just used the defaults. After reading and accepting the license agreement you’ll be asked to create a user name and password. Finally supply your name/email address etc. and save the changes.

Setting up a build agent

From the Agents page you can click the link Install Build Agents and either download the zip (and decompress it to a given folder) or run the MSI. I’ve simply unzipped the build agent.

We’ll need to edit the buildAgent.properties file before we can run the agent. This file lives in the conf folder of the agent.

Change the serverUrl property to point at your TeamCity server, e.g. serverUrl=http://<ip_address>:8111 (and change anything else you might want to).

Now run the following on the build agent machine (I’m using Windows for the agent)

agent.bat start

Finally, click the Unauthorized link in the Agents section of the TeamCity web page and Authorize the agent. If you’re using the free version of TeamCity you can have up to three build agents.

Writing our Play application from scratch

There’s no doubt it’s much better to use a template, or in this case a seed (as outlined in my previous post Starting out with the Playframework (in IntelliJ)), to get us up and running with any application. But I like to know what’s going on with my code; I’m not mad on just leaving it as “magic happens”.

So here’s me, reverse engineering the Play seed and creating my own code from scratch. You should be able to work through each step and at the end, you’ll end up with a single page “Hello World” app.

Let’s get all the bits in place…

Creating the Project

From IntelliJ

  • Go to File | New Project
  • Select Scala | SBT
  • Name your project
  • Ensure the correct Java and Scala versions are selected
  • Press Finish

Add support for the Play plugin

Select the project node in the package/project (sources root) treeview, right mouse click on it, select New File and name the file plugins.sbt. Place the following into it

addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.5.14")

This will add the Play plugin.

Edit the build.sbt

Now open build.sbt and add the following below the scalaVersion line

lazy val root = (project in file(".")).enablePlugins(PlayScala)

libraryDependencies += filters
libraryDependencies += "org.scalatestplus.play" %% "scalatestplus-play" % "2.0.0" % Test

Note: I had to change the scalaVersion to 2.11.11 to get this code to work, obviously it could be a versioning issue on my part, otherwise I got unresolved dependencies.

Click Enable Auto-Import

Add a gradle build file

Add a new file at the same level as the build.sbt and name it build.gradle. Add the following

plugins {
    id 'play'
    id 'idea'
}

task wrapper(type: Wrapper) {
    gradleVersion = '3.1'
}

repositories {
    maven {
        name "typesafe-maven-release"
        url "https://repo.typesafe.com/typesafe/maven-releases"
    }
    ivy {
        name "typesafe-ivy-release"
        url "https://repo.typesafe.com/typesafe/ivy-releases"
        layout "ivy"
    }
}

def playVersion = '2.5.14'
def scalaVersion = '2.12.2'

model {
    components {
        play {
            platform play: playVersion, scala: scalaVersion, java: '1.8'
            injectedRoutesGenerator = true

            sources {
                twirlTemplates {
                    defaultImports = TwirlImports.SCALA
                }
            }
        }
    }
}

dependencies {
    ['filters-helpers', 'play-logback'].each { playModule ->
        play "com.typesafe.play:${playModule}_$scalaVersion:$playVersion"
    }
}

Configuration folder

Now add a conf folder at the same level as the project folder (i.e. just select the root, right mouse click, select New Directory and name it conf). Select the conf folder, right mouse click, select Mark Directory As… and select Unmark as Resource Root.

Right mouse click on the conf directory, select New File and name it routes (it’s just a text file). Place the following in the file

GET   /   controllers.IndexController.index

Add another file to conf named application.conf. We’re not actually putting anything in this file.

Create the app folder

Now, again at the root level, create another directory named app and Unmark as Source Root. In this directory add a controllers directory and a views directory.

In app, add a new file named Filters.scala and add the following to it

import javax.inject.Inject

import play.api.http.DefaultHttpFilters

import play.filters.csrf.CSRFFilter
import play.filters.headers.SecurityHeadersFilter
import play.filters.hosts.AllowedHostsFilter

class Filters @Inject() (
   csrfFilter: CSRFFilter,
   allowedHostsFilter: AllowedHostsFilter,
   securityHeadersFilter: SecurityHeadersFilter
   ) extends DefaultHttpFilters(
   csrfFilter,
   allowedHostsFilter,
   securityHeadersFilter
)

Add the controllers and views

Okay, before we actually see anything of use we need controllers and views. In essence, at this point you can create a run configuration (an SBT Task with the task run) and should be able to start the HTTP server and see an error as it cannot find the IndexController (at least this gives us the feeling we’re almost there).

Now in the app/controllers folder add a new file named IndexController.scala and place the following code in it

package controllers

import javax.inject._
import play.api._
import play.api.mvc._

class IndexController @Inject() extends Controller {
   def index = Action { implicit request =>
      Ok(views.html.index())
   }
}

And now we need the index view, so in app/views add an index.scala.html file and a main.scala.html (this will be our main entry point and index maps to our IndexController). The main file should look like this

@(title: String)(content: Html)

<!DOCTYPE html>
<html lang="en">
    <head>
        <title>@title</title>
    </head>
    <body>
        @content
    </body>
</html>

and index.scala.html should look like this


@main("Hello World") {
    <h1>Hello World</h1>
}

Note: the @main call passes “Hello World” through to main.scala.html; that file declares the parameters @(title: String)(content: Html), where title is the value passed from index.scala.html in the @main argument and content is the HTML block that follows it.

Now run up the application and using your browser check http://<ip_address>:9000 and you should see the results of your index.scala.html displayed – “Hello World”.

You might feel a little familiar with the @ commands in the html files – these are template commands which the Play Template Engine provides – they’re similar to the likes of Razor and other templating engines.

So, for example, we might take the request (passed into our IndexController) and inject it into our HTML template like this

def index = Action { implicit request =>
  Ok(views.html.index(name = request.rawQueryString))
}

and in the index.scala.html

@(name: String)

@main("Hello World") {
    <h1>Hello @name</h1>
}

Now if we navigate to this http://localhost:9000/?Parrot in our browser, we should see Hello Parrot displayed.

Next steps

Unlike the seed code, I removed all CSS, JavaScript etc. In the seed application, off the root we have a public directory with public/images, public/javascripts and public/stylesheets. To make these folders available to our *.scala.html files, we need to add a route to the conf/routes file, for example

GET     /assets/*file               controllers.Assets.versioned(path="/public", file: Asset)

Now, in our *.scala.html files we can access these assets using code such as

@routes.Assets.versioned("images/favicon.png")
Here’s the seed main.scala.html file to demonstrate including stylesheets, images and scripts

@(title: String)(content: Html)

<!DOCTYPE html>
<html lang="en">
    <head>
        <title>@title</title>
        <link rel="stylesheet" media="screen" href="@routes.Assets.versioned("stylesheets/main.css")">
        <link rel="shortcut icon" type="image/png" href="@routes.Assets.versioned("images/favicon.png")">
    </head>
    <body>
        @content
        <script src="@routes.Assets.versioned("javascripts/main.js")" type="text/javascript"></script>
    </body>
</html>


Obviously, once we really get going with our code we’ll probably want to start logging interactions. Play comes with a default logger built in which is as simple to use as this

  • import play.api.Logger
  • Logger.debug("Some String")

Starting out with the Playframework (in IntelliJ)

Getting a seed application installed

I couldn’t find a “how to” for setting up play from scratch but instead it seems best to download a seed project from the Play Starter Projects.

Select the Play Scala Starter Example and download it – unzip to a folder and now you have a bare bones play application.

Importing the seed application into IntelliJ

  • From File, select New Project
  • Select Scala then SBT
  • Now select the folder where your seed project is
  • Build the project

If you get the error message object index is not a member of package views.html then select View | Tool Windows | Terminal (or Alt+F12); a terminal window will open. Now run the following

  • sbt clean
  • sbt compile

See this post on the problem: “object index is not a member of package views.html” when opening scala play project in scala ide.

Cleaning then compiling seemed to work for me.

Creating a Run configuration

You may find that if you click Run, the only option is “All in root” and from this you might find IntelliJ tries to run some tests.

We need to create a new configuration to run play via sbt.

See Setting up your preferred IDE, steps recreated from this post below.

  • Select Run | Edit Configuration
  • Press the + button
  • Select SBT Task
  • Name your task – mine’s Run Play, simple enough
  • In the Tasks edit box type run
  • Press OK

Now when you want to run the application use this configuration and sbt run will be executed. Now you can go to http://localhost:9000 and see your running app.

See also: Play Tutorials

Promises in JavaScript/TypeScript

Promises are analogous to futures or tasks (if your background is C#) and are used for asynchronous code.

I’m using TypeScript at the moment (as part of learning Angular 2) but I’ll try to list code etc. in both JavaScript and TypeScript, solely to demonstrate the syntax. The underlying functionality will be exactly the same as (of course) TypeScript transpiles to JavaScript anyway.

The syntax for a promise in JavaScript looks like this

let promise = new Promise((resolve, reject) => {
   // carry out some async task
   // then resolve or reject
   // i.e. resolve(result);
   // and/or reject("Failed");
});

As you can see in the above code, we can (in essence) return a success with resolve, or a failure with reject.

In some situations we might simply wish to immediately resolve or reject without actually executing any asynchronous code.

In such situations we can use the Promise.resolve and/or Promise.reject methods, i.e.

// in JavaScript
function getData() {
   return Promise.resolve(data);
   // or 
   return Promise.reject("Cannot connect");
}

// in TypeScript
getData(): Promise<MyDataType> {
   return Promise.resolve(data);
   // or
   return Promise.reject("Cannot connect");
}

As you can see the difference between TypeScript and JavaScript (as one might expect) is the strong type checking/expectations.

Using the results from a Promise

As a promise is potentially going to be taking some time to complete we need a way to handle continuations, i.e. what happens when it completes.

In C# we have ContinueWith; in JavaScript we have then, hence our code, having received a Promise, might look like this

let promise = getData();

promise.then(result => {
   // do something with the result
}).catch(reason => {
   // failure, do something with the reason
});
There are other Promise methods, see promise in JavaScript but this should get us up and running with the basics.
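Putting resolve, reject, then and catch together, here’s a small self-contained sketch in TypeScript (getGreeting is a made-up example function, not part of any library):

```typescript
// getGreeting resolves immediately on valid input and rejects otherwise,
// so callers use the same then/catch pattern they would for long-running work.
function getGreeting(name: string): Promise<string> {
    if (name.length === 0) {
        return Promise.reject(new Error("name required"));
    }
    return Promise.resolve(`Hello ${name}`);
}

getGreeting("Parrot")
    .then(greeting => console.log(greeting)) // logs "Hello Parrot"
    .catch(reason => console.error(reason));
```

Note that then itself returns a new Promise, which is why calls can be chained as above.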

Turning my Raspberry Pi Zero W into a Tomcat server

I just got hold of a Raspberry Pi Zero W and decided it’d be cool/fun to set it up as a Tomcat server.


I am (as some other posts might show) a bit of a fan of using Docker (although still a novice), so I went the same route with the Pi.

As per the post Docker comes to Raspberry Pi, run the following from your Pi’s shell

curl -sSL https://get.docker.com | sh

Next, add your username to the docker group (I’m using the standard pi user)

sudo usermod -aG docker pi

Pull Tomcat for Docker

Don’t forget, the Raspberry Pi uses an ARM processor, so whilst Docker can help in deploying many things, the image still needs to have been built for the ARM processor. Hence just trying to pull the standard Tomcat image will fail with a message such as

exec user process caused "exec format error"

So to install Tomcat use the izone image

docker pull izone/arm:tomcat

Let’s run Tomcat

To run Tomcat (as per the izone Docker page), run

docker run --rm --name Tomcat -h tomcat \
-e PASS="admin" \
-p 8080:8080 \
-ti izone/arm:tomcat

You may need to wait a while before the Tomcat server is up and running, but once it is, simply use your browser to navigate to

http://<ip_address>:8080

and you should see the Tomcat home page.

Lifecycle hooks in Angular 2

If you haven’t already read this post LIFECYCLE HOOKS, I would highly recommend you go and read that first.

This is a really short post on just getting up and running with lifecycle hooks.

What are lifecycle hooks?

Angular 2 offers ways for our classes/components to be called when certain key parts of a lifecycle workflow occur. The most obvious would be after creation (some form of initialization phase) and of course, conversely, when a class/component is cleaned up and potentially disposed of.

These are not the only lifecycle hooks, but this post is not meant to replicate everything in Angular 2’s documentation; instead it highlights how we use the hooks in our code.

How do we use a lifecycle hook

Let’s look at implementing a component with the two (probably) most used hooks, OnInit and OnDestroy, as these are obviously especially useful in situations where state needs to be loaded and stored.

As usual, we need to import the two interfaces, hence we need the line

import { OnInit, OnDestroy } from '@angular/core';

OnInit and OnDestroy look like this

export interface OnInit {
   ngOnInit(): void;
}

export interface OnDestroy {
   ngOnDestroy(): void;
}

Our component would then implement these interfaces, thus

export class DetailsComponent 
      implements OnInit, OnDestroy {

   ngOnInit(): void {
      // e.g. load state
   }

   ngOnDestroy(): void {
      // e.g. save state
   }
}
and that’s all there is to it.

Note: In the above I said we need to import the interfaces. In reality this is not true, interfaces are optional (as they’re really a TypeScript construct to aid in type checking etc.). What Angular 2 is really looking for is the specially named methods, i.e. ngOnInit and ngOnDestroy in our case.
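To illustrate that name-based lookup, here’s a hand-rolled sketch in plain TypeScript (just an illustration of the duck typing involved, not Angular 2’s actual implementation):

```typescript
// A class that happens to define the specially named method...
class WithHook {
    initialized = false;
    ngOnInit(): void {
        this.initialized = true;
    }
}

// ...and one that doesn't.
class WithoutHook {
    initialized = false;
}

// The "framework" checks for the method by name and calls it if present;
// no interface is required for the hook to fire.
function create<T extends object>(ctor: new () => T): T {
    const instance = new ctor();
    const maybeHook = instance as { ngOnInit?: () => void };
    if (typeof maybeHook.ngOnInit === "function") {
        maybeHook.ngOnInit();
    }
    return instance;
}

const withHook = create(WithHook);       // ngOnInit is called
const withoutHook = create(WithoutHook); // silently skipped
```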

See LIFECYCLE HOOKS, or more specifically the section on Lifecycle sequence, for an understanding of when the different lifecycle hooks get called.

For completeness, I’ll list the same here, but without all the descriptions.

  • ngOnChanges
  • ngOnInit
  • ngDoCheck
  • ngAfterContentInit
  • ngAfterContentChecked
  • ngAfterViewInit
  • ngAfterViewChecked
  • ngOnDestroy

Service Injection with Angular 2

Note: This post is based heavily on the Angular 2 tutorial, Services, hopefully I can add something useful to this.

We all know what dependency injection is about, right?

Let’s see how we create and inject services using Angular 2.

Naming convention

With components, the convention is to create a TypeScript component class suffixed with the word Component; hence our detail component class would be named DetailComponent, and likewise the convention with regards to file naming is to name the file detail.component.ts (all lower case).

We use a similar convention for services. The TypeScript class might be named MyDataService therefore our file would be my-data.service.ts

Note: the hyphen between word boundaries and of course replacing component with service.

Creating our service

Let’s create a simple data service. As per our naming convention, create a file named my-data.service.ts and here’s the code

import { Injectable } from '@angular/core';

@Injectable()
export class MyDataService {
   // methods etc. omitted
}

To quote the Angular 2 documentation

The @Injectable() decorator tells TypeScript to emit metadata about the service. The metadata specifies that Angular may need to inject other dependencies into this service.

Using the service

In the code that uses the service we still need to import the service (as obviously we need a reference to the type) but instead of our component/code creating the service, we leave this to Angular 2. So the usual would be to create a constructor and allow Angular 2 to inject our service via the constructor. For example here’s the bare bones for a DetailsComponent that uses the previously implemented service

// other imports omitted
import { MyDataService } from './my-data.service';

@Component({
   // selector etc. 
   providers: [MyDataService]
})
export class DetailsComponent {
   constructor(private myDataService: MyDataService) {
   }
}

Notice we also need to register our service in the providers array, either within the component or within app.module.ts inside @NgModule.

If the service is registered with providers in a component, then an instance is created when the component is created (and is available for any child components), whereas registering within @NgModule would be more like creating a singleton of the service as the service would be created when the module is created and then available to all components.
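For the module-level (singleton-like) registration, the sketch below shows a typical app.module.ts; it’s a configuration fragment, and AppComponent and the file names are assumed for illustration, not taken from the code above:

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

// AppComponent is an assumed root component for this sketch.
import { AppComponent } from './app.component';
import { MyDataService } from './my-data.service';

@NgModule({
    imports: [BrowserModule],
    declarations: [AppComponent],
    // Registered here, one instance of the service is shared by every
    // component in the module.
    providers: [MyDataService],
    bootstrap: [AppComponent]
})
export class AppModule { }
```

If you register the service here, remove it from the component’s providers array; a component-level registration would otherwise shadow the module-level instance for that component and its children.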

Creating an Angular 2 component

An Angular 2 component (written using TypeScript) is a class with the @Component decorator/attribute.

For example

import { Component } from '@angular/core';

@Component({
   // metadata properties
})
export class MyComponent {
   // methods, fields etc.
}

In its simplest form we might just supply an inline template in the metadata properties to define the component’s HTML; we can also create multiline HTML encased in backticks. Alternatively we can store the HTML in a separate file and reference it using templateUrl. Let’s look at some examples.

Single line HTML

@Component({
   template: '<p>Hello</p>'
})
// class definition

Multiline HTML (using the backtick)

The backtick ` is used for multiline strings.

@Component({
   template: `
      <p>Hello</p>
   `
})
// class definition

External Template HTML

@Component({
   templateUrl: 'my-template.html'
})
// class definition

Of course, the equivalent of an HTML component wouldn’t be so useful if we couldn’t also define styles. So, as you’d expect, we also have metadata properties for styles that work in the same way as the template.

The main thing to note is that styleUrls is plural and expects an array of styles, i.e.

@Component({
  selector: 'my-app',
  templateUrl: 'my-app.html',
  styleUrls: ['my-app1.css', 'my-app2.css']
})
// class definition

Referring to class fields

At this point, the class itself might seem a little pointless, but by adding fields etc. to it we can reference these from the @Component template itself.

For example

@Component({
   template: '<p>{{greeting}}</p>'
})
export class MyComponent {
   greeting: string = 'Hello';
}
The {{ }} is Angular 2’s expression syntax, meaning it can contain code to access fields, or actual expressions, such as 2 + 3 or method calls such as max(a, b).

My first Fody ModuleWeaver

I’ve been wanting to look into Fody for a while now. I find it a little annoying when I have to pollute code that is a functional part of an application with a mass of boilerplate code which could be automatically generated. Obviously we can look at T4 templates, partial classes and other ways to achieve such things, but a nicer, AOP-style way is to “code weave” the boilerplate at compile time.

Note: To really get the benefit of Fody you need to understand Mono.Cecil which allows us a simpler way to write our IL. This is outside the scope of this post.

I’m not intending to go too in depth with Fody in this post, but I am going to describe creating a really pointless ModuleWeaver (the class/code which weaves in our new IL code) to add a new type to your assembly as a great starting point for more interesting/impressive code.

I’m going to show how to develop some code into a separate solution and use from a Test application (which sounds easy, and is once you know the expectations of Fody, but took me a while to find those expectations due to some tutorials being out of date) and I’ll also show how to develop a ModuleWeaver within an application’s solution.

Let’s create our first weaver

  • Create a class library project named TestWeaver.Fody.
    Note: The .Fody part is important as Fody will search for DLL’s using *.Fody.DLL
  • Add NuGet package FodyCecil
  • Create/rename your Class1 as/to ModuleWeaver.cs
  • Include the following code in the ModuleWeaver class
    public class ModuleWeaver
    {
        public ModuleDefinition ModuleDefinition { get; set; }

        public void Execute()
        {
            var newType = new TypeDefinition("TestNamespace", "TestType",
                TypeAttributes.Public, ModuleDefinition.TypeSystem.Object);
            ModuleDefinition.Types.Add(newType);
        }
    }

Note: The basic source for Execute is taken from ModuleWeaver.

Creating a test application

  • Create a console application (we’re not actually going to write any console code for this, so we could have used a different project type)
  • Add the NuGet package Fody (this should also add FodyCecil)
  • Change the FodyWeavers.xml to look like this
    <?xml version="1.0" encoding="utf-8"?>
    <Weavers>
      <TestWeaver />
    </Weavers>

Deploying our weaver assembly

Okay, now if you build this, of course it will fail; we need to deploy the previously created TestWeaver.Fody somewhere. As we’re not using NuGet to deploy it and not including it in the solution, we have to place the DLL in a specific location so Fody knows where to find it.

Note: Remember, these DLLs are solely for code weaving and not required in the deployed application, hence we do not need these files to go into the bin folder and we do not need to reference any *.Fody.DLL assemblies, as they’re just used by Fody to change our IL (unless you’ve put attributes or the like in the DLL – best practice would be to have these in a separate DLL which is referenced).

Add a folder named Tools to your solution folder, i.e. SolutionDir/Tools.

Drop the files from your TestWeaver.Fody into this folder and now try to rebuild our test application. Now Fody should use your weaver DLL to change the IL. If you open the application assembly in ILSpy (or the like), you should see that Fody (via our TestWeaver) created the TestNamespace and TestType we defined in our ModuleWeaver.

Including our Weaver in our solution

If you’ve covered the steps above, now delete the SolutionDir/Tools folder as we’re going to instead create our Weaver in a project within the test application solution.

In this case

  • Create a class library named Weavers in our test application
  • Create a class/rename the Class1 to TestWeaver.cs
  • Add this code to TestWeaver
    public class TestWeaver
    {
        public ModuleDefinition ModuleDefinition { get; set; }

        public void Execute()
        {
            var newType = new TypeDefinition("NewTestNamespace", "NewTestType",
                TypeAttributes.Public, ModuleDefinition.TypeSystem.Object);
            ModuleDefinition.Types.Add(newType);
        }
    }

    Note: I’ve renamed the namespace and type within the generated IL just to ensure we notice the difference in ILSpy

Notice (again) that we do not reference this project, but we will need the project to be built before our test application. So select the solution, right mouse click and select Project Build Order; you’ll probably see Weavers listed after the test application, hence select the Dependencies tab, ensure your test application project is selected in the Projects drop down and tick Depends On Weavers.

Now rebuild and inspect the resultant assembly using ILSpy. The namespace and type added should have their names changed, as we’re now generating our IL code via the solution.

Debugging your Weaver

Obviously, as the weaving takes place as part of the build, we’re going to need to attach our debugger to a build or run the build via a debugger. Initially I simply placed a Debugger.Break() in my Execute method on my weaver module and clicked the build button. This worked the first time but not subsequent times and required the weaver to be in the same solution as the test application, so we’d be best to run msbuild directly from our project and debug via that application.

Here’s a comprehensive post on debugging msbuild – Debugging a Fody Weaver (Plugin) and/or debugging MSBuild.

We can also add logging to our application via some of the other possible insertion points in our module weaver. For example, take a look at ModuleWeaver and you’ll see the “optional members” include LogWarning, LogError, LogInfo (and others) which allow us to output log information during the build.


See also:

  • Simple AOP with Fody
  • Fody on github
  • Creating a Fody Add-in

Interacting with SOAP headers using CXF

Sometimes you might want to interact with data being passed over SOAP within the SOAP headers, for example this is a technique used to pass security tokens or user information etc.

CXF comes with quite a few “insertion points” whereby we can insert our code into the workflow of WSDL creation, SOAP calls etc. Here we’ll just look at the specifics of intercepting the SOAP call and extracting the header (of course the reverse can also be implemented, whereby we intercept an outward bound call and insert a SOAP header item, but that’s for the reader to investigate).

I’m only going to cover implementing this in code, but obviously this can also be set up via Spring configuration.

To add an interceptor to our JaxWsServerFactoryBean, we do the following

JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
// set up the bean, address etc.

org.apache.cxf.endpoint.Server server = factory.create();
server.getEndpoint().getInInterceptors().add(new SoapInterceptor());

Now let’s look at the SoapInterceptor

public class SoapInterceptor extends AbstractSoapInterceptor {
    public static final String SECURITY_TOKEN_ELEMENT = "securityToken";

    public SoapInterceptor() {
        super(Phase.PRE_PROTOCOL);
    }

    @Override
    public void handleMessage(SoapMessage message) throws Fault {
        String securityToken = getTokenFromHeader(message);
        // do something with the token, maybe save in a context
    }

    private String getTokenFromHeader(SoapMessage message) {
        String securityToken = null;
        try {
            List<Header> list = message.getHeaders();
            for (Header h : list) {
                // note: compare header names with equals, not ==
                if (SECURITY_TOKEN_ELEMENT.equals(h.getName().getLocalPart())) {
                    Element token = (Element) h.getObject();
                    if (token != null) {
                        securityToken = token.getTextContent();
                    }
                }
            }
        } catch (RuntimeException e) {
            throw new JAXRPCException("Invalid User", e);
        } catch (Exception e) {
            throw new JAXRPCException("Security Token failure", e);
        }
        return securityToken;
    }
}