Category Archives: Java

Creating a Java Web Application with IntelliJ

Creating our project

  • Choose File | New | Project
  • Select Java Enterprise
  • Then tick the Web Application

If you have an application server set up, select or add it using New. I’m ultimately going to add an embedded Tomcat container, so I’m leaving this blank.

Finally, give the project a name, e.g. MyWebApp.

Adding a Maven pom.xml

I want to use Maven to import packages, so add a pom.xml to the project root and then supply the bare bones (as follows)

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>MyWebApp</groupId>
    <artifactId>MyWebApp</artifactId>
    <packaging>war</packaging>
    <name />
    <version>MyWebApp.0.0.1-SNAPSHOT</version>
    <description />

</project>

In IntelliJ, right-click the pom.xml and select Add as Maven Project.

Next, we want to tell Maven how to compile and generate our war, so add the following after the description tag in the pom.xml

<build>
   <sourceDirectory>${basedir}/src</sourceDirectory>
   <outputDirectory>${basedir}/web/WEB-INF/classes</outputDirectory>
   <resources>
      <resource>
         <directory>${basedir}/src</directory>
         <excludes>
            <exclude>**/*.java</exclude>
         </excludes>
      </resource>
   </resources>
   <plugins>
      <plugin>
         <artifactId>maven-war-plugin</artifactId>
         <configuration>
            <webappDirectory>${basedir}/web</webappDirectory>
            <warSourceDirectory>${basedir}/web</warSourceDirectory>
         </configuration>
      </plugin>
      <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-compiler-plugin</artifactId>
         <configuration>
            <source>1.8</source>
            <target>1.8</target>
         </configuration>
      </plugin>
   </plugins>
</build>

Creating a run configuration

Whilst my intention is to add a Tomcat embedded server, we can create a new run configuration at this point to test everything worked.

Select Edit Configurations… from the run configurations drop-down on the toolbar (or Run | Edit Configurations…), click + and add a Tomcat Server | Local.

Mine’s set with the URL http://localhost:8080/. Give the configuration a name and don’t forget to select the Deployment tab, press + and then click Artifact… I selected MyWebApp:war.

Press OK and OK again to finish the configuration and now you can run Tomcat locally and deploy the war.

Don’t forget to execute mvn install to build your war file.

It’s been a long time JDBC…

As I’m working in Java again, I’m having to reacquaint myself with various Java language features and libraries.

It’s been a long time since I’ve had to use JDBC, but here’s a snippet of code to demonstrate accessing an Oracle database via an LDAP-based JDBC URL.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

// credentials are supplied via a Properties object
Properties connectionProps = new Properties();
connectionProps.put("user", "YOUR_USER_NAME");
connectionProps.put("password", "YOUR_PASSWORD");

Connection conn = DriverManager.getConnection(
   "jdbc:oracle:thin:@ldap://SOME_URL:SOME_PORT/,cn=OracleContext,dc=putridparrot,dc=com",
                    connectionProps);

Statement statement = conn.createStatement();
ResultSet rs = statement.executeQuery("select id, blob from SOME_TABLE");

while(rs.next()) {
   String id = rs.getString(1);
   String blob = rs.getString(2);
   // do something with the results
}
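
These days I’d also lean towards try-with-resources and a PreparedStatement for anything parameterised. Here’s a minimal sketch along those lines (the table and column names are just placeholders, as before):

// also requires: import java.sql.PreparedStatement;
String sql = "select id, blob from SOME_TABLE where id = ?";

try (Connection conn = DriverManager.getConnection(
        "jdbc:oracle:thin:@ldap://SOME_URL:SOME_PORT/,cn=OracleContext,dc=putridparrot,dc=com",
        connectionProps);
     PreparedStatement statement = conn.prepareStatement(sql)) {

   // parameters are bound by position, starting at 1
   statement.setString(1, "SOME_ID");

   try (ResultSet rs = statement.executeQuery()) {
      while (rs.next()) {
         String id = rs.getString("id");
         String blob = rs.getString("blob");
         // do something with the results
      }
   }
}
// the connection, statement and result set are all closed automatically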

Writing a custom JUnit test runner

We’re going to create a new JUnit test runner which will be a minimal runner, i.e. it has the barest essentials to run some tests. It should be noted that JUnit includes an abstract class ParentRunner which actually gives us a better starting point, but I wanted to demonstrate a starting point for a test runner which might not adhere to the style used by JUnit.

Our test runner should extend org.junit.runner.Runner and override two methods from the abstract Runner class; a public constructor is also required which takes a single argument of type Class. Here’s the code

import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.RunNotifier;

public class MinimalRunner extends Runner {

    public MinimalRunner(Class testClass) {
    }

    public Description getDescription() {
        return null;
    }

    public void run(RunNotifier runNotifier) {
    }
}

We’ll also need to add the dependency

<dependency>
   <groupId>junit</groupId>
   <artifactId>junit</artifactId>
   <version>4.12</version>
</dependency>

Before we move onto developing this into something more useful, to use our test runner on a Test class, we need to add the RunWith annotation to the class declaration, for example

import org.junit.runner.RunWith;

@RunWith(MinimalRunner.class)
public class MyTest {
}
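
So that the runner has something to discover and run, MyTest might contain a couple of ordinary JUnit test methods, for example (these are just placeholder tests):

import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith(MinimalRunner.class)
public class MyTest {

    @Test
    public void shouldAdd() {
        assertEquals(4, 2 + 2);
    }

    @Test
    public void shouldConcatenate() {
        assertEquals("ab", "a" + "b");
    }
}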

Okay, back to the test runner. The getDescription method should return a description which ultimately makes up the tree we’d see when running our unit tests, so we’ll want to return a parent/child relationship of descriptions where the parent is the test class name and its children are the methods marked with the Test annotation (we’ll assume children but no deeper, i.e. no grandchildren etc.).

Spoiler alert: we’ll be needing the Description objects again later, so let’s cache them in readiness.

public class MinimalRunner extends Runner {

    private Class testClass;
    private HashMap<Method, Description>  methodDescriptions;

    public MinimalRunner(Class testClass) {
        this.testClass = testClass;
        methodDescriptions = new HashMap<>();
    }

    public Description getDescription() {
        Description description = 
           Description.createSuiteDescription(
              testClass.getName(), 
              testClass.getAnnotations());

        for(Method method : testClass.getMethods()) {
            Annotation annotation = 
               method.getAnnotation(Test.class);
            if(annotation != null) {
                Description methodDescription =
                   Description.createTestDescription(
                      testClass,
                      method.getName(), 
                      annotation);
                description.addChild(methodDescription);

                methodDescriptions.put(method, methodDescription);
            }
        }

        return description;
    }

    public void run(RunNotifier runNotifier) {
    }
}

In the above code we create the parent (or suite) description first and then locate all methods with the @Test annotation and create test descriptions for them. These are added to the parent description and, along with their Method, to our cached methodDescriptions map.

Note: we’ve not written code to handle the @Before, @After or @Ignore annotations, just to keep things simple.

We’ll obviously also need to add the following imports to the above code

import org.junit.Test;
import java.lang.annotation.Annotation;
import java.lang.reflect.Method;
import java.util.HashMap;
// also need these two for the next bit of code
import org.junit.AssumptionViolatedException;
import org.junit.runner.notification.Failure;

Next up we need to actually run the tests and, as you’ve probably worked out, this is where the run method comes in. There’s nothing particularly special here; we’re just going to run each method on a single thread. Had we been handling @Before and @After, those methods would be invoked before and after each test invocation within the forEach loop below (but we’re keeping this simple).

public void run(RunNotifier runNotifier) {

   try {
      Object instance = testClass.newInstance();

      methodDescriptions.forEach((method, description) ->
      {
         try {
            runNotifier.fireTestStarted(description);

            method.invoke(instance);
         }
         catch(AssumptionViolatedException e) {
            Failure failure = new Failure(description, e.getCause());
            runNotifier.fireTestAssumptionFailed(failure);
         }
         catch(Throwable e) {
            Failure failure = new Failure(description, e.getCause());
            runNotifier.fireTestFailure(failure);
         }
         finally {
            runNotifier.fireTestFinished(description);
         }
      });
   }
   catch(Exception e) {
      e.printStackTrace();
   }
}

In the code above we simply create an instance of the test class, then loop through our previously cached methods invoking the @Test methods. The calls on the runNotifier object tell JUnit (and hence UIs such as the IntelliJ test UI) which test has started running and whether it succeeded or failed. In the case of failure, the use of getCause() was added because otherwise (at least in my sample project) the exception showed information about the test runner code itself, which was superfluous to the actual test failure.
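
If we did want to support @Before and @After, one simple approach would be to reflect over the test class for those annotations and invoke the matching methods around each test invocation. Here’s a rough sketch (ignoring inheritance and the ordering of multiple annotated methods):

import org.junit.After;
import org.junit.Before;

// inside MinimalRunner...

private void invokeAnnotated(Object instance,
                             Class<? extends Annotation> annotation) throws Exception {
    // invoke every public method carrying the given annotation
    for (Method method : testClass.getMethods()) {
        if (method.getAnnotation(annotation) != null) {
            method.invoke(instance);
        }
    }
}

// then, within the forEach in run(), wrap each test invocation:
// invokeAnnotated(instance, Before.class);
// method.invoke(instance);
// ...
// invokeAnnotated(instance, After.class);   // typically in the finally block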

I’ve not added support for filtering or sorting capabilities within our code; to do this our MinimalRunner would also implement the Filterable interface for filtering and Sortable for sorting (both within the org.junit.runner.manipulation package).

I’m not going to bother implementing these interfaces in this post as the IDE I use for Java (IntelliJ) handles this stuff for me anyway.

Code on GitHub

Code’s available on GitHub.

Using JMock

At some point we’re likely to require a mocking framework for our unit test code. Of course there are several Java-based frameworks; in this post I’m going to look into using JMock.

Setting up an example

Let’s start by looking at an interface, my old favourite, a Calculator

public interface Calculator {
    double add(double a, double b);
}

Let’s now add some dependencies to the pom.xml

<dependencies>
   <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
   </dependency>
   <dependency>
      <groupId>org.jmock</groupId>
      <artifactId>jmock-junit4</artifactId>
      <version>2.8.4</version>
      <scope>test</scope>
   </dependency>
</dependencies>

Using JMock

We need to start off by creating a Mockery object. This will be used to create the mocks as well as handle the three A’s: Arrange, Act and Assert. Let’s jump straight in and look at some code…

package com.putridparrot;

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    private Mockery mockery;

    @Before
    public void setUp() {
        mockery = new Mockery();
    }

    @After
    public void tearDown() {
        mockery.assertIsSatisfied();
    }

    @Test
    public void add() {

        final Calculator calc = mockery.mock(Calculator.class);

        mockery.checking(new Expectations()
        {
            {
                oneOf(calc).add(2.0, 4.0);
                will(returnValue(6.0));
            }
        });

        double result = calc.add(2, 4);
        assertEquals(6.0, result, 0);
    }
}

In the code above we’re creating the Mockery in the setup method and asserting that the expectations were satisfied in the teardown.

We then use the Mockery to create a mock of the Calculator interface, which we would normally pass into some other class to use; for simplicity I’ve simply demonstrated how we arrange the mock using mockery.checking along with the expectations. Then we act on the mock object by calling the add method which, of course, will execute our arranged behaviour.
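
To make the “normally pass it into some other class” point a little more concrete, here’s a hypothetical class under test which takes the Calculator as a constructor dependency; in a real test we’d arrange the mock as above, construct this class with it and assert against its methods rather than calling the mock directly:

// hypothetical class under test, constructor-injected with the Calculator
public class InvoiceTotaliser {

    private final Calculator calculator;

    public InvoiceTotaliser(Calculator calculator) {
        this.calculator = calculator;
    }

    public double total(double net, double tax) {
        // delegates the arithmetic to the injected Calculator
        return calculator.add(net, tax);
    }
}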

Here’s an example which demonstrates arranging a different return value for each call; first we’ll add the following to our Calculator interface

int random();

and now create the following test, which expects random to be called three times and arranges each return accordingly.

@Test
public void random() {
   final Calculator calc = mockery.mock(Calculator.class);

   mockery.checking(new Expectations()
   {
      {
         exactly(3)
            .of(calc)
            .random();

         will(onConsecutiveCalls(
            returnValue(4),
            returnValue(10),
            returnValue(42)
         ));
      }
   });

   assertEquals(4, calc.random());
   assertEquals(10, calc.random());
   assertEquals(42, calc.random());
}

Of course there’s plenty more to JMock, but this should get you started.
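
As a quick taster of what else is available, expectations can also use argument matchers, looser cardinalities and actions such as throwing exceptions; the following is a sketch based on the standard JMock 2 allowing/with/throwException members (so treat the exact incantation as indicative rather than gospel):

@Test(expected = IllegalStateException.class)
public void addWhenOffline() {

    final Calculator calc = mockery.mock(Calculator.class);

    mockery.checking(new Expectations()
    {
        {
            // any number of calls, with any double arguments
            allowing(calc).add(with(any(double.class)), with(any(double.class)));
            will(throwException(new IllegalStateException("calculator offline")));
        }
    });

    // the arranged action throws, which the expected attribute asserts
    calc.add(1, 2);
}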

Vert.x futures

In previous examples of implementations of AbstractVerticle classes I’ve used start and stop methods which take no arguments; there are actually asynchronous versions of these methods which take a Vert.x Future.

For example

public class FutureVerticle extends AbstractVerticle {
   @Override
   public void start(Future<Void> future) {
   }

   @Override
   public void stop(Future<Void> future) {
   }
}

Let’s take a look at how our start method might change to use futures.

@Override
public void start(Future<Void> future) {

   // routing and/or initialization code

   vertx.createHttpServer()
      .requestHandler(router::accept)
      .listen(port, l ->
      {
         if(l.succeeded()) {
            future.complete();
         }
         else {
            future.fail(l.cause());
         }
      });
}

In this example we simply set the state of the future to success or failure and, in the case of failure, supply a Throwable as the argument to the fail method.

Using the Future in our own code

Obviously the Future class may be used outside of the start and stop methods, so let’s take a look at creating and using a Future.

To create a future simply use

Future<Record> f = Future.future();

in this case we’re creating a Future which will yield a Record. We can now supply our own AsyncResult handler to handle the future on completion, i.e.

Future<Record> f = Future.future();

f.setHandler(ar ->
{
   if(ar.succeeded()) {
      Record record = ar.result();
      // do something with the result
   }
});
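
The handler only fires once something actually completes (or fails) the future; the producing side is simply a call to complete or fail, for example (record here being whatever Record instance our code has produced):

// somewhere in the code that produces the Record
f.complete(record);   // marks the future as succeeded and fires the handler

// or, on error (a future can only be completed or failed once)
// f.fail(new RuntimeException("unable to resolve the record"));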

Many of the Vert.x methods (like listen in the earlier code) supply overloads with an AsyncResult callback. We can pass a future as that callback using its completer method and supply a handler via the future. For example

Future<HttpServer> f = Future.future();
f.setHandler(l ->
{
   if(l.succeeded()) {
      // 'future' here is the start method's Future<Void>
      future.complete();
   }
   else {
      future.fail(l.cause());
   }
});

vertx.createHttpServer()
   .requestHandler(router::accept)
   .listen(port, f.completer());

Service Discovery using ZooKeeper with Vert.x

Following on from my post Service discovery with Vert.x, let’s switch from using the Vert.x built-in Service Discovery to using ZooKeeper.

You can fire up a docker container running ZooKeeper using

docker run --name zookeeper --restart always -d zookeeper

Now we’ll need to add the following to our pom.xml

<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-service-discovery-bridge-zookeeper</artifactId>
   <version>${vertx.version}</version>
</dependency>

Note: vertx.version is set to 3.5.1

The code to create the Service Discovery and connect it to ZooKeeper is as follows

ServiceDiscovery discovery = ServiceDiscovery.create(vertx)
   .registerServiceImporter(new ZookeeperServiceImporter(),
      new JsonObject()
        .put("connection", "172.17.0.2:2181")
        .put("basePath", "/services/hello-service"));

Replace the IP address with the one created by Docker, or use localhost if you’re running ZooKeeper locally.

We register the ZookeeperServiceImporter and supply at least the “connection”; the “basePath” is not required and a default will be used if none is explicitly supplied.

Don’t forget you’ll need the import

import io.vertx.servicediscovery.zookeeper.ZookeeperServiceImporter;

References

Docker ZooKeeper
Vert.x Service Discovery

Code

Source code relating to these posts on Vert.x can be found at VertxSamples. The code differs from the posts in that in some cases it’s been refactored to reduce some code duplication etc.

Configuration with Vert.x

Obviously you can use whatever configuration code you like to configure your Verticles and/or Vert.x applications; you might use standard configuration files, a Redis store or whatever.

Vert.x has a single library which allows us to abstract such things and offers different storage formats and code to access different storage mechanisms.

Add the following dependency to your pom.xml

<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-config</artifactId>
   <version>${vertx.version}</version>
</dependency>

vertx.version is set to 3.5.0 in my example

We can actually define multiple storage mechanisms for our configuration data and chain them. For example, imagine we have some configuration on the localhost and other configuration on a remote HTTP server. We can set things up to get properties that are a combination of both locations.

Note: If location A has the property port set to 8080 but location B (which appears after location A in the chain) has port set to 80, then the last value located in the chain is the one we’ll see when querying the configuration for this property. For other properties the result is a combination of those from A and those from B.

We can also mark configuration sources as optional so that, if they do not exist or are down, the chaining will not fail or throw an exception.

Let’s start with a simple config.properties file stored in /src/main/resources. Here’s the file (nothing exciting here)

port=8080

We’ll now create the ConfigStoreOptions for a “properties” file, then add it to the chain (we’ll just use a single store for this example) and finally we’ll retrieve the port property from the configuration retriever…

// create the options for a properties file store
ConfigStoreOptions propertyFile = new ConfigStoreOptions()
   .setType("file")
   .setFormat("properties")
   .setConfig(new JsonObject().put("path", "config.properties"));

// add the options to the chain
ConfigRetrieverOptions options = new ConfigRetrieverOptions()
   .addStore(propertyFile);
    // .addStore for other stores here

ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
retriever.getConfig(ar ->
{
   if(ar.succeeded()) {
      JsonObject o  = ar.result();
      int port = o.getInteger("port");
      // use port config
   }
});

Note: Don’t forget to set the configuration store options “path” to the location and name of your file.
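
To illustrate the chaining described earlier, we could add a second store after the file store; for example vertx-config supports a "sys" store type backed by JVM system properties, so a -Dport=9090 on the command line would override the value from config.properties (a sketch, reusing the propertyFile options from above):

// a second store backed by JVM system properties
ConfigStoreOptions sysProps = new ConfigStoreOptions()
   .setType("sys");

// the file store first, system properties last - the last store
// in the chain wins for any duplicated keys such as "port"
ConfigRetrieverOptions options = new ConfigRetrieverOptions()
   .addStore(propertyFile)
   .addStore(sysProps);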

Service discovery with Vert.x

We may get to a point whereby we have multiple Vert.x applications running and we want one Verticle to communicate with another – this is easy enough if the IP address and port are fixed but not so easy in more scalable/real-world scenarios where we cannot guarantee these are fixed.

In such situations we can use service discovery to locate other services.

Before we get started with the code, we need to add the following to the pom.xml

<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-service-discovery</artifactId>
   <version>${vertx.version}</version>
</dependency>

I’m using vertx.version 3.5.0 in my examples.

Publishing/Registering our Verticle with ServiceDiscovery

To register our Verticle with ServiceDiscovery we create a Record object which tells the ServiceDiscovery how to access the Verticle; this includes its host/IP, port and service root, along with a name for other code to use to locate the service. For example

Record record = HttpEndpoint.createRecord(
   "hello-service",
   "localhost",
   8080,
   "/hello");

So this basically says: create a Record named “hello-service” (the key or name of the service) whose IP/host is localhost (obviously this is just for my testing). Next we supply the exposed port and finally the root of the service.

We then publish this record to the ServiceDiscovery object like this

discovery.publish(record, ar ->
{
   if (ar.succeeded()) {
      // publication succeeded
      publishedRecord = ar.result();
   } else {
      // publication failed
   }
});

Upon success we store the Record (in this case we only do this if the call succeeded) so that we can unpublish the service when it’s shut down.

Let’s look at the full code for a simplified HelloVerticle

public class HelloVerticle extends AbstractVerticle {

    private ServiceDiscovery discovery;
    private Record publishedRecord;

    @Override
    public void start() {
        discovery = ServiceDiscovery.create(vertx,
           new ServiceDiscoveryOptions());

        Router router = Router.router(vertx);
        router.get("/hello").handler(ctx -> {
            ctx.response()
                .putHeader("content-type", "text/plain")
                .end("hello");
        });

        Record record = HttpEndpoint.createRecord(
                "hello-service",
                "localhost",
                8080,
                "/hello");

        discovery.publish(record, ar ->
        {
            if (ar.succeeded()) {
                // publication success
                publishedRecord = ar.result();
            } else {
                // publication failure
            }
        });

        vertx
           .createHttpServer()
           .requestHandler(router::accept)
           .listen(8080, ar -> {
              // handle success/failure 
           });
    }

    @Override
    public void stop() {
        if(discovery != null) {
            discovery.unpublish(publishedRecord.getRegistration(), ar ->
            {
                if (ar.succeeded()) {
                    // Success
                } else {
                    // cannot unpublish the service, 
                    // may have already been removed, 
                    // or the record is not published
                }
            });

            discovery.close();
        }
    }
}

Locating a service via ServiceDiscovery

Let’s take a look at some “consumer” code which will use service discovery to locate our HelloVerticle. As expected we need to get hold of the ServiceDiscovery object and then try to locate the Record for the previously published service.

In the example below, we search for the “name” “hello-service”; this is wrapped in a JsonObject and the result (if successful) will contain a Record which matches the search criteria. Using HttpClient we can now simply get a reference to this service and interact with it without ever knowing its IP address or port.

ServiceDiscovery discovery = ServiceDiscovery.create(v);
discovery.getRecord(
   new JsonObject().put("name", "hello-service"), found -> {
   if(found.succeeded()) {
      Record match = found.result();
      ServiceReference reference = discovery.getReference(match);
      HttpClient client = reference.get();

      client.getNow("/hello", response ->
         response.bodyHandler(
            body -> 
               System.out.println(body.toString())));
   }
});
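
One thing not shown above: once we’ve finished with the service object we should release the reference so the underlying client can be cleaned up (or simply close the discovery object), something along these lines:

// when we’ve finished with the HttpClient obtained from the reference
reference.release();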

HttpClient in Vert.x

Vert.x includes an HttpClient and associated code for interacting with the HTTP protocol; obviously this can be used to write client applications or in situations where we might use a reference obtained from service discovery.

This is a very short post which is just meant to demonstrate the client capability which will be used in the next post (Service discovery with Vert.x).

HttpClient client = vertx.createHttpClient();
client.getNow(8080, "localhost", "/hello", response ->
{
   response.bodyHandler(
      body -> 
         System.out.println(body.toString()));
});

The HttpClient gives us get, post, head etc.; the Now-suffixed methods tend to offer a simpler syntax, taking the response callback directly, and are composable using a fluent style syntax.
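
For comparison, the non-Now form returns an HttpClientRequest which we configure and then end; a rough equivalent of the snippet above (a sketch against the Vert.x 3.x API) would be:

HttpClientRequest request = client.get(8080, "localhost", "/hello");

request.handler(response ->
   response.bodyHandler(
      body ->
         System.out.println(body.toString())));

// nothing is sent until end() is called
request.end();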

See also Creating an HTTP client for more information.

Benchmarking my Java code using JUnitBenchmarks

As part of building some integration tests for my Java service code, I wanted to get some (micro-)benchmarks run against the tests. In C# we have the likes of NBench (see my post Using NBench for performance testing), so it comes as no surprise to find libraries such as JUnitBenchmarks in Java.

Note: the JUnitBenchmarks site states it’s now deprecated in favour of JMH, but I will cover it here anyway as it’s very simple to use and get started with, and it fits nicely in with existing JUnit code.

JUnitBenchmarks

First off we need to add the required dependency to our pom.xml, so add the following

<dependency>
   <groupId>com.carrotsearch</groupId>
   <artifactId>junit-benchmarks</artifactId>
   <version>0.7.2</version>
   <scope>test</scope>
</dependency>

JUnitBenchmarks, as the name suggests, integrates with JUnit. To enable benchmarking of our tests within the test runner we simply add a rule to the unit test class, like this

import com.carrotsearch.junitbenchmarks.BenchmarkRule;
import org.junit.Rule;
import org.junit.rules.TestRule;

public class SampleVerticleIntegrationTests {
    @Rule
    public TestRule benchmarkRule = new BenchmarkRule();

    // tests
}

This will report information on the test, like this

[measured 10 out of 15 rounds, threads: 1 (sequential)]
round: 1.26 [+- 1.04], round.block: 0.00 [+- 0.00], 
round.gc: 0.00 [+- 0.00], 
GC.calls: 5, GC.time: 0.19, 
time.total: 25.10, time.warmup: 0.00, 
time.bench: 25.10

The first line tells us that the test was actually executed 15 times (or rounds), but only 10 of those rounds were “measured”; the other 5 were warm-ups, all on a single thread. This is the default for benchmarking, however what if we want to change these parameters…

If we want to be more specific about the benchmarking of various test methods we add the annotation @BenchmarkOptions, for example

@BenchmarkOptions(benchmarkRounds = 20, warmupRounds = 0)
@Test
public void testSave() {
   // our code
}

As can be seen, this is a standard test, but the annotation tells JUnitBenchmarks to run the test 20 times (with no warm-up runs) and then report the benchmark information, for example

[measured 20 out of 20 rounds, threads: 1 (sequential)]
round: 1.21 [+- 0.97], round.block: 0.00 [+- 0.00], 
round.gc: 0.00 [+- 0.00], 
GC.calls: 4, GC.time: 0.22, 
time.total: 24.27, time.warmup: 0.00, 
time.bench: 24.27

As you can see the first line tells us the code was measured 20 times on a single thread with no warm-ups (as we specified).
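
Putting the rule and the options annotation together, a minimal benchmarked test class looks something like this (the sleep is just a stand-in for the real code under test):

import com.carrotsearch.junitbenchmarks.BenchmarkOptions;
import com.carrotsearch.junitbenchmarks.BenchmarkRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestRule;

public class SampleBenchmarkTests {

    @Rule
    public TestRule benchmarkRule = new BenchmarkRule();

    @BenchmarkOptions(benchmarkRounds = 20, warmupRounds = 5)
    @Test
    public void testSomethingSlow() throws InterruptedException {
        // stand-in for the real code under test
        Thread.sleep(10);
    }
}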

I’m not going to cover build integration here, but checkout JUnitBenchmarks: Build Integration for such information.

What do the results actually mean?

I’ll pretty much recreate what’s documented on the Result class here.

Let’s look at these results…

[measured 20 out of 20 rounds, threads: 1 (sequential)]
round: 1.21 [+- 0.97], round.block: 0.00 [+- 0.00], 
round.gc: 0.00 [+- 0.00], 
GC.calls: 4, GC.time: 0.22, 
time.total: 24.27, time.warmup: 0.00, 
time.bench: 24.27

We’ve already seen that the first line tells us how many times the test was run, and how many of those runs were warm-ups. It also tells us how many threads were used in this benchmark.

round tells us the average round time in seconds (hence the example took 1.21 seconds with a stddev of +/- 0.97 seconds).
round.block tells us the average (and stddev) of blocked threads, in this example there’s no concurrency hence 0.00.
round.gc tells us the average and stddev of the round’s GC time.
GC.calls tells us the number of times GC was invoked (in this example 4 times).
GC.time tells us the accumulated time taken invoking the GC (0.22 seconds in this example).
time.total tells us the total benchmark time which includes benchmarking and GC overhead.
time.warmup tells us the total warmup time which includes benchmarking and GC overhead.

Caveats

Apart from the obvious caveat that this library has been marked as deprecated (though I feel it’s still useful), when benchmarking you have to be aware that the results may be dependent upon outside factors, such as available memory, hard disk/SSD speed if tests include any file I/O, network latency etc. So such figures are best seen as approximations of performance.

Also there seems to be no way to “fail” a test, for example if the test exceeds a specified time or more GCs than x are seen, so treat these more as informational.