Monthly Archives: April 2018

Writing a custom JUnit test runner

We’re going to create a new JUnit test runner which will be a minimal runner, i.e. one with the barest essentials needed to run some tests. It should be noted that JUnit includes an abstract class ParentRunner which actually gives us a better starting point, but I wanted to demonstrate a starting point for a test runner which might not adhere to the style used by JUnit.

Our test runner should extend org.junit.runner.Runner, implementing the two abstract methods from the Runner class, and a public constructor is required which takes a single argument of type Class. Here’s the code

import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.RunNotifier;

public class MinimalRunner extends Runner {

    public MinimalRunner(Class testClass) {
    }

    public Description getDescription() {
        return null;
    }

    public void run(RunNotifier runNotifier) {
    }
}

We’ll also need to add the JUnit dependency to our pom.xml

<dependency>
   <groupId>junit</groupId>
   <artifactId>junit</artifactId>
   <version>4.12</version>
</dependency>

Before we move on to developing this into something more useful: to use our test runner on a test class, we need to add the @RunWith annotation to the class declaration, for example

import org.junit.runner.RunWith;

@RunWith(MinimalRunner.class)
public class MyTest {
}

Okay, back to the test runner. The getDescription method should return a description which ultimately makes up the tree we see when running our unit tests, so we want to return a parent/child relationship of descriptions where the parent is the test class name and its children are the methods marked with the @Test annotation (we’ll assume children but no deeper, i.e. no grandchildren etc.).

Spoiler alert: we’ll need the Description objects again later, so let’s cache them in readiness.

public class MinimalRunner extends Runner {

    private Class testClass;
    private HashMap<Method, Description>  methodDescriptions;

    public MinimalRunner(Class testClass) {
        this.testClass = testClass;
        methodDescriptions = new HashMap<>();
    }

    public Description getDescription() {
        Description description = 
           Description.createSuiteDescription(
              testClass.getName(), 
              testClass.getAnnotations());

        for(Method method : testClass.getMethods()) {
            Annotation annotation = 
               method.getAnnotation(Test.class);
            if(annotation != null) {
                Description methodDescription =
                   Description.createTestDescription(
                      testClass,
                      method.getName(), 
                      annotation);
                description.addChild(methodDescription);

                methodDescriptions.put(method, methodDescription);
            }
        }

        return description;
    }

    public void run(RunNotifier runNotifier) {
    }
}

In the above code we create the parent (or suite) description first, then locate all methods with the @Test annotation and create test descriptions for them. Each test description is added as a child of the parent description and, keyed by its Method, to our cached methodDescriptions.

Note: we’ve not written code to handle the @Before, @After or @Ignore annotations, just to keep things simple.
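The reflection scan at the heart of getDescription can be tried in isolation with nothing but the JDK. In this sketch, MyTest is a hypothetical stand-in for JUnit’s @Test annotation (the names here are illustrative, not part of JUnit):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class AnnotationScan {

    // a stand-in for JUnit's @Test; runtime retention is what makes
    // the annotation visible to getAnnotation at runtime
    @Retention(RetentionPolicy.RUNTIME)
    public @interface MyTest {}

    public static class Sample {
        @MyTest public void first() {}
        @MyTest public void second() {}
        public void notATest() {}
    }

    // mirrors the loop in getDescription: walk the public methods
    // and keep those carrying the annotation
    public static List<String> annotatedMethods(Class<?> testClass) {
        List<String> names = new ArrayList<>();
        for (Method method : testClass.getMethods()) {
            if (method.getAnnotation(MyTest.class) != null) {
                names.add(method.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(annotatedMethods(Sample.class));
    }
}
```

Running main prints the annotated method names (first and second) but not notATest, which is exactly the filtering our runner performs.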

Obviously we’ll also need to add the following imports to the MinimalRunner code

import org.junit.Test;
import java.lang.annotation.Annotation;
import java.lang.reflect.Method;
import java.util.HashMap;
// also need these for the next bit of code
import java.lang.reflect.InvocationTargetException;
import org.junit.AssumptionViolatedException;
import org.junit.runner.notification.Failure;

Next up we need to actually run the tests and, as you’ve probably worked out, this is where the run method comes in. There’s nothing particularly special here; we’re just going to run through each method on a single thread. Had we been handling @Before and @After then those methods would be called around the test invocation within the following code’s forEach loop (but we’re keeping this simple).

public void run(RunNotifier runNotifier) {

   try {
      Object instance = testClass.newInstance();

      methodDescriptions.forEach((method, description) ->
      {
         try {
            runNotifier.fireTestStarted(description);

            method.invoke(instance);
         }
         catch(InvocationTargetException e) {
            // invoke wraps the test's exception, so unwrap it
            Throwable cause = e.getCause();
            Failure failure = new Failure(description, cause);
            if(cause instanceof AssumptionViolatedException) {
               runNotifier.fireTestAssumptionFailed(failure);
            }
            else {
               runNotifier.fireTestFailure(failure);
            }
         }
         catch(Throwable e) {
            runNotifier.fireTestFailure(new Failure(description, e));
         }
         finally {
            runNotifier.fireTestFinished(description);
         }
      });
   }
   catch(Exception e) {
      e.printStackTrace();
   }
}

In the code above we simply create an instance of the test class, then loop through our previously cached methods, invoking each @Test method. The calls on the runNotifier object tell JUnit (and hence UIs such as the IntelliJ test UI) which test has started running and whether it succeeded or failed. In the case of a failure, getCause() is used because method.invoke wraps the test’s actual exception; without unwrapping it (at least in my sample project), the failure showed information about the test runner code itself, which was superfluous to the actual test failure.

I’ve not added support for filtering or sorting capabilities within our code; to do this our MinimalRunner would also implement the Filterable interface for filtering and Sortable for sorting (both within the org.junit.runner.manipulation package).

I’m not going to bother implementing these interfaces in this post as the IDE I use for Java (IntelliJ) handles this stuff for me anyway.

Code on GitHub

Code’s available on GitHub.

Using JMock

At some point we’re likely to require a mocking framework for our unit test code. Of course there are several Java-based frameworks; in this post I’m going to look into using JMock.

Setting up an example

Let’s start by looking at an interface, my old favourite, a Calculator

public interface Calculator {
    double add(double a, double b);
}

Let’s now add some dependencies to the pom.xml

<dependencies>
   <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
   </dependency>
   <dependency>
      <groupId>org.jmock</groupId>
      <artifactId>jmock-junit4</artifactId>
      <version>2.8.4</version>
      <scope>test</scope>
   </dependency>
</dependencies>

Using JMock

We need to start off by creating a Mockery object. This will be used to create the mocks as well as handle the three A’s: Arrange, Act and Assert. Let’s jump straight in and look at some code…

package com.putridparrot;

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    private Mockery mockery;

    @Before
    public void setUp() {
        mockery = new Mockery();
    }

    @After
    public void tearDown() {
        mockery.assertIsSatisfied();
    }

    @Test
    public void add() {

        final Calculator calc = mockery.mock(Calculator.class);

        mockery.checking(new Expectations()
        {
            {
                oneOf(calc).add(2.0, 4.0);
                will(returnValue(6.0));
            }
        });

        double result = calc.add(2, 4);
        assertEquals(6.0, result, 0);
    }
}

In the code above we’re creating the Mockery in the setup method and asserting its expectations were satisfied in the teardown.

We then use the Mockery to create a mock of the Calculator interface, which we would normally pass into some other class to use; for simplicity I’ve simply demonstrated how we arrange the mock using mockery.checking along with the expectations. Then we act on the mock object by calling the add method, which of course will execute our arranged behaviour.

Here’s an example of some code which demonstrates arranging a method to return a different value each time it’s called. First we’ll add the following to our Calculator interface

int random();

and now create the following test, which expects random to be called three times and arranges each return accordingly.

@Test
public void random() {
   final Calculator calc = mockery.mock(Calculator.class);

   mockery.checking(new Expectations()
   {
      {
         exactly(3)
            .of(calc)
            .random();

         will(onConsecutiveCalls(
            returnValue(4),
            returnValue(10),
            returnValue(42)
         ));
      }
   });

   assertEquals(4, calc.random());
   assertEquals(10, calc.random());
   assertEquals(42, calc.random());
}

Of course there’s plenty more to JMock, but this should get you started.

Vert.x futures

In previous examples of implementations of AbstractVerticle classes I’ve used start and stop methods which take no arguments; there are actually asynchronous versions of these methods which take the Vert.x Future class.

For example

public class FutureVerticle extends AbstractVerticle {
   @Override
   public void start(Future<Void> future) {
   }

   @Override
   public void stop(Future<Void> future) {
   }
}

Let’s take a look at how our start method might change to use futures.

@Override
public void start(Future<Void> future) {

   // routing and/or initialization code

   vertx.createHttpServer()
      .requestHandler(router::accept)
      .listen(port, l ->
      {
         if(l.succeeded()) {
            future.complete();
         }
         else {
            future.fail(l.cause());
         }
      });
}

In this example we simply set the state of the future to success or failure, in the case of failure supplying a Throwable as the argument to the fail method.

Using the Future in our own code

Obviously the Future class may be used outside of the start and stop methods, so let’s take a look at creating and using a Future.

To create a future simply use

Future<Record> f = Future.future();

in this case we’re creating a Future which takes a Record. We can now supply our own handler to be called with the AsyncResult on completion, i.e.

Future<Record> f = Future.future();

f.setHandler(ar ->
{
   if(ar.succeeded()) {
      // do something with result
   }
});

Many of the Vert.x methods (like listen in the earlier code) supply overloads with an AsyncResult callback. We can pass a future as a callback using the completer method and supply a handler via the future. For example

// assuming this code lives within start(Future<Void> future)
Future<HttpServer> f = Future.future();
f.setHandler(l ->
{
   if(l.succeeded()) {
      future.complete();
   }
   else {
      future.fail(l.cause());
   }
});

vertx.createHttpServer()
   .requestHandler(router::accept)
   .listen(port, f.completer());

Shell scripting (Linux)

The shell script file

By default we name a script file with the .sh extension, and the first line is usually (although not strictly required) one of the following

#!/bin/sh

OR

#!/bin/bash

The use of sh tells Linux we want to run the script with the default shell. Note that this might be dash, bash, the Bourne shell or any other shell available, hence when using sh the developer of the script needs to be aware that they cannot expect bash (for example) capabilities to exist; if bash-specific code exists within the script, the #!/bin/bash line should be used.

Note: #! is known as the shebang (or sha-bang)

What shell am I running?

You can use $SHELL in your scripts or from the command line (note this holds the user’s default login shell, not necessarily the shell currently executing the script), for example

echo $SHELL

Making the script executable

chmod +x myscript.sh

Comments

Comments are denoted by #. Anything after the # until a new line, will be seen as a comment, i.e.

echo "Some text" # write to stdout "Some text"

Variables

We can create variables, which are case-sensitive, like this

VARIABLE1="Hello"
variable2="World"

echo $VARIABLE1 $variable2

Note: You should not have spaces around the = operator or the command may not be found. So for example this will fail: VARIABLE1 = “Hello”

Whilst we can use variables that are not strings, underneath they’re stored as strings and only converted to numerical values when used with numerical functions etc.

So for example

i=0
$i=$i+1

will fail with 0=0+1: command not found. To increment a variable (for example) we need to use the following syntax

let "i=i+1"
#OR
i=$((i+1))

we can also use postfix operators, i.e. ++ or += such as

let "i++"
#OR
((i++))
#OR
let "i+=1"
#OR
((i+=1))
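To see the string-versus-number behaviour mentioned above directly, the same variable can act as a string or a number depending on context; a quick sketch

```shell
n=5
text="$n$n"      # string context: concatenation, not addition
sum=$((n + n))   # numeric context: arithmetic expansion converts for us
echo "$text and $sum"
```

this prints 55 and 10: the same n is concatenated as text but summed inside $(( )).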

We can also create arrays using the following syntax

array=("a" "b" "c")

and an example of indexing into this array is as follows

echo "Array element at index 1 is ${array[1]}"
# outputs Array element at index 1 is b

We can also remove, or unset, a variable like this

unset i
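A quick sketch of unset in action; the ${i:-unset} expansion used here substitutes the fallback text once the variable no longer exists

```shell
i=42
before=${i:-unset}   # i exists, so we get its value
unset i
after=${i:-unset}    # i has gone, so the fallback is used
echo "before=$before after=$after"
```

this prints before=42 after=unset.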

Logic operations

IF, THEN, ELSE, ELIF…

As our scripts become more capable/complex it’s likely we’ll want to use some logic and branching code, i.e. IF, THEN, ELSE, ELIF code. Let’s look at an example of IF, THEN syntax

if [ $i = 6 ] 
then 
   echo "i is correctly set to 6" 
fi

Note the space after the [ and before the ]; without these the script will error with command not found.

The [ ] syntax is the way you’ll often see this type of operation written; [ is actually an alias for the test command, so the condition above could equally be written using test, for example

if test $i = 6; then echo "i is correctly set to 6"; fi

Note: the example above shows the test on a single line; in this case the ; is used as a command separator.

We can use = or -eq for equality, along with less than, greater than etc., but note which flavour compares what: the symbolic operators perform string comparisons, so = does a string comparison, whereas -eq, -lt, -gt and friends handle numerical comparisons (i.e. we do not use < for numbers, we use -lt).
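A small example of the difference: 10 is greater than 9 numerically, but as strings “10” sorts before “9”

```shell
a=10
b=9
# -gt compares the values as numbers
if [ "$a" -gt "$b" ]
then
   echo "numerically a > b"
fi
# \< (escaped so the shell doesn't treat it as redirection) compares as strings
if [ "$a" \< "$b" ]
then
   echo "as strings a < b"
fi
```

both echo lines run, because each comparison is true in its own terms.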

We can also use ELSE for example

if [ ! -d "BAK" ]
then
   echo "BAK does not exist"
else   
   echo "BAK exists"
fi

and finally ELIF

if [ -d "BAK" ]
then
   echo "BAK exists"
elif [ -d "BACK" ]
then
   echo "BACK exists"
fi

Check out If Statements! which lists these operators in more depth.

[[ ]] vs [ ]

The [ ] is actually just an alias for test, as mentioned above. Bash and some other shells also support the [[ ]] syntax, which is more powerful. See What is the difference between test, [ and [[ ? for more information.

Case (switch)

Extending the IF, THEN, ELSE, ELIF we also have a switch style comparison capability, for example

case $response in
   y|Y) echo "Executing script" ;;
   *) exit ;;
esac

The y|Y) syntax is pattern matching, with ) terminating the pattern. This is followed by one or more statements to be executed, followed by the ;; terminator. The *) means match against anything else (the default condition). We then terminate the case block with esac. So in this example we’ll output “Executing script” if the response variable is either y or Y.
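A self-contained variant of the above (with response set inline rather than read from the user) shows the matching in action

```shell
response="Y"
case $response in
   y|Y) answer="yes" ;;
   n|N) answer="no" ;;
   *) answer="unknown" ;;
esac
echo "answer is $answer"
```

this prints answer is yes, since Y matches the y|Y pattern.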

Loops

We can run commands in loops such as for, while and until.

while and until are very similar, except that while keeps looping whilst a condition is true, whereas until loops until a condition is true. Here’s a couple of simple examples

i=0
while [ $i -lt 10 ]
do
   echo $i
   ((i++))
done

until [ $i -lt 0 ]
do
   echo $i
   ((i--))
done

for loops use a similar syntax again, except they use the in keyword, for example

array=("a" "b" "c")
for item in ${array[@]}
do
  echo $item
done

This example demonstrates looping through an array, but we can also loop through items returned by shell commands, for example

for item in $(ls -a)
do
   echo $item
done

In this example we’re looping through the results of the command ls -a. Although a better solution to this might be

for item in ${PWD}/*
do
   echo $item
done

The ls version splits a file name containing spaces into multiple items, so it’s not too useful if we want each file name including spaces.
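We can demonstrate the glob version coping with spaces by creating a couple of awkwardly named files in a temporary directory; quoting "$item" is what keeps each name in one piece

```shell
dir=$(mktemp -d)
touch "$dir/file one" "$dir/file two"
count=0
for item in "$dir"/*
do
   count=$((count+1))
   echo "found: $item"
done
echo "$count files"
```

this reports 2 files, one per file, spaces and all.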

Here’s a final example using the back tick (`), which can be used to enclose commands; in this instance we execute the command seq 1 10

for item in `seq 1 10`;
do
   echo $item
done   

Passing arguments to your shell script

Arguments are passed into your script via the command line as you’d normally do; in this example my shell script (myscript.sh) takes two arguments, Hello and World

./myscript.sh Hello World 

To reference the arguments in the script we simply use $1 and $2. i.e.

echo $1 # Should be Hello
echo $2 # Should be World

There’s also the $@ variable, which denotes all arguments, i.e.

echo "$@" 

Will output all the arguments passed into the script or function.
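We can see $@ preserving individual arguments (including ones containing spaces) by simulating the script’s arguments with set --; the quotes around "$@" are what keep each argument intact

```shell
set -- Hello "Wide World"   # simulate two command line arguments
echo "got $# arguments"
for arg in "$@"
do
   echo "arg: $arg"
done
```

this reports got 2 arguments, with "Wide World" kept as a single argument.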

Functions

We can create functions inside our shell scripts and/or include other script files which have functions etc. within.

You need to declare a function before using it, and writing a function is pretty simple, i.e.

say_hello()
{
    echo "say_hello called"
}

say_hello

To include arguments/parameters we use the same system as passing arguments via the command line, so for example

say_something()
{
    echo "say_something says $1 $2"
}

say_something Hello World
# outputs say_something says Hello World

here we see the arguments are turned into the $1 and $2 variables, but of course local to our function.

STDIN/STDOUT/STDERR

We’ve already seen that echo writes to STDOUT in its default usage, although it can be used to output to STDERR; see Illustrated Redirection Tutorial.

We can use read to read input from the user/command line via STDIN.

In its most basic use we can write the following

read input 

Where input is a variable name.

We can also use it in slightly more powerful ways, such as

read -n1 -p "Are you sure you wish to continue (y/n)?" input

In this case we read a single character (-n1) with the prompt (-p) “Are you sure you wish to continue (y/n)?” into the variable named input.

The read command can also be used to read data from a file by using a file descriptor and the -u argument.
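As a simpler sketch than the -u file descriptor form, redirecting a file into a while loop lets read consume it line by line

```shell
file=$(mktemp)
printf 'first line\nsecond line\n' > "$file"
lines=0
while read -r line
do
   lines=$((lines+1))
   echo "read: $line"
done < "$file"   # redirect the file into the loop's stdin
echo "$lines lines read"
```

the -r flag stops read mangling backslashes, which is almost always what you want when processing files.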

References

Loops for, while and until

ZooKeeper

Why do we need ZooKeeper

The first question I had to ask myself was: why do we need ZooKeeper? After all, I could store/publish the host/port etc. to a Redis server, a database, or just a plain, centrally located web service (which for all I care could store the information to a simple file).

ZooKeeper has been designed specifically for handling configuration and name registry functionality in a distributed and clustered environment, hence it comes with more advanced features to ensure consistency of data along with capabilities to handle cluster management of the servers.

ZooKeeper in Docker

I’m using ZooKeeper within docker.

To run a ZooKeeper instance within a Docker container, simply use

docker run --name zookeeper1 --restart always -d zookeeper

My first instance is named zookeeper1. This command will run a server instance of ZooKeeper.

We may also need to attach to the service with a client, we can run the following command

docker run -it --rm --link zookeeper1:zookeeper zookeeper zkCli.sh -server zookeeper

Ensure the name of the Docker instance matches the name you assigned to the server.

Client commands

  • create We can create a path, in /root/sub format, to our data, for example create /services “hello”. Note: it seems that to create child nodes we cannot just type create /services/hello-service “hello”; we need to first create the root node, then the child node.
  • ls We can list nodes by typing ls / or using root and/or child nodes, for example ls /services/hello-service. If nodes exist below the listed node the output will be [node-name], so for example ls /services will result in [hello-service]. When no child nodes exist we’ll get [].
  • get We can get any data stored at a node, so for example get /services/hello-service will display the data stored in the node along with metadata such as the date/time the data was stored, its size etc.
  • rmr We can recursively remove nodes (i.e. remove a node and all its children) using rmr. For example rmr /services.
  • delete We can delete an end node using delete, but this will not work on a node which has children (i.e. it works only on empty nodes).
  • connect We can connect the client to a running instance of a ZooKeeper server using connect, i.e. connect 172.17.0.2:2181
  • quit Exits the client.

There are other client commands, but these are probably the main ones; from the client, run help to see a full list of commands.

Some commands have a [watch] option, which can be enabled by supplying 1 or true as the last argument to the relevant commands.

Useful References

ZooKeeper Usage 1 – 5