Monthly Archives: June 2017

C++ lambdas in a little more depth

Lambda expressions appeared in C++ 11.

Let’s take a simple example. We want to create a lambda which takes an enum (of type UpdateFlag) and returns an LPCTSTR (yes, we’re in the Visual C++ world) which we’ll use when writing to cout.

We can create the lambda like this

auto lambda = [](UpdateFlag flag)
{
   switch (flag)
   {
      case UpdateFlag::Update:
         return _T("Update");
      case UpdateFlag::Delete:
         return _T("Delete");
      case UpdateFlag::Missing:
         return _T("Missing");
   }
};

If you prefer not to use the auto keyword, we can declare the same lambda using std::function, like this

std::function<LPCTSTR(UpdateFlag)> lambda = [](UpdateFlag flag)
{
// switch removed for brevity
};

I’m not quite sure where you’d need to use the std::function variation of this (it’s mainly useful when you need a named, concrete type, for example to store the lambda in a class member or container), but this is what it would look like.

Now to use our lambda, we’d simply have something like

std::cout << lambda(updateFlag);

We can also declare and invoke the lambda inline (without assigning it to a variable) using the following

std::cout << [](UpdateFlag flag)
{
// switch removed for brevity
}(updateFlag);

Note: we need to pass the updateFlag in a parameter list after the lambda declaration.

The [] is an empty capture list. We could alter the previous samples (assuming a variable updateFlag is available in the scope where the lambda is declared) by capturing updateFlag in the capture list, for example

auto updateFlag = UpdateFlag::Update;

auto lambda = [updateFlag]
{
   switch (updateFlag)
   {
      case UpdateFlag::Update:
         return _T("Update");
      case UpdateFlag::Delete:
         return _T("Delete");
      case UpdateFlag::Missing:
         return _T("Missing");
   }
};

Structuring the lambda

Let’s look at the format of the lambda…

The lambda syntax basically takes one of the following forms

  • [capture-list] (params) mutable(optional) constexpr(optional)(c++17) exception attribute -> ret { body }
  • [capture-list](params) -> ret { body }
  • [capture-list](params) { body }
  • [capture-list] { body }

The above is replicated from the Lambda expressions page.

The capture-list is a comma-separated list of zero or more captures, which take (or “capture”) variables from the enclosing scope and make them available inside the lambda.

The params are a standard list of parameters to pass into your lambda and the body is your actual code.

References

An excellent article on the subject of C++11 lambdas can be found here.
The lambda specification can be found here.

Revisiting an old friend

Reminiscing

For many years (a long time ago in a galaxy far away) I was a C++ developer, but I haven’t done much in C++ for a while now, until recently…

I needed to revisit a C++ application I wrote (not quite so long ago) to make a few additions and at the same time I thought it worth reviewing the changes that have taken place in C++ in recent years. This post primarily looks at some of the C++11 changes I’ve come across whilst updating my app and, whilst not comprehensive, it should give a taste of “Modern C++”.

auto is to C++ what var is to C#

auto i = 1234;

This is resolved to the type at compile time (i.e. it’s not dynamic). The compiler simply resolves the type based upon the value on the right hand side of the assignment operator.

decltype

decltype gives us the type of an expression at compile time; combined with the trailing return type syntax it lets us express a function's return type in terms of its parameters, for example

template<typename A, typename B>
auto f(A a, B b) -> decltype(a + b)
{
	return a + b;
}

We can also use it in a sort of dynamic typedef manner

decltype(123) v = 1;
// equivalent to 
int v = 1;

decltype(123) declares the variable as an int (as the type is taken from the expression passed in), then the value 1 is assigned to the variable.

for (each) loops known as range based loops in C++

The for loop can be used as a foreach-style loop (in a similar way to Java's), although C++ calls this a range-based loop. It works on arrays, on objects which have begin and end member functions, and on initializer lists.

for(auto &it : myvector)
{   
}

// using initializer list

for(auto i : { 1, 2, 3})
{
}

nullptr takes over from NULL

The nullptr is a strongly typed keyword which can replace those NULL or 0 usages when setting or testing a variable to see whether it points at something or not.

if(errors == nullptr) 
{
}

Another benefit of the keyword being strongly typed is that when we have two overloads, one taking an int and the second taking a pointer type, the compiler can now correctly choose the pointer overload, for example

void f(int i)
{
}

void f(int* i)
{
}

f(NULL);    // calls f(int i)
f(nullptr); // calls f(int* i)

Obviously the use of NULL suggests the developer wants f(int*), but as NULL is just an integer constant the intent is lost; in the case of Visual Studio, f(int) is called when we use NULL, whereas nullptr unambiguously selects f(int*).

Lambdas as well

I’ve written a longer post on lambdas which will be published after this post. However, here’s a taster showing a lambda used with the algorithm header’s std::for_each function

std::for_each(v.begin(), v.end(), [](int i)
{
   std::cout << i;
});

In this example, v is a vector of ints.

The lambda is denoted with the [] which captures variables from the calling code (in this case nothing is captured) and the rest is probably self-explanatory: we’ve created a lambda which takes an int and uses cout to write the value to the console.

Initializer lists

Like C# we can create lists using { } syntax, for example

std::array<int, 3> a = { 1, 2, 3 };
// or using a vector
std::vector<int> v = { 1, 2, 3 };

Sadly, Visual Studio 2015 doesn’t appear to fully support this syntax and allow us to easily initialize vectors etc., but CLion does. Thankfully Visual Studio 2017 does support this fully.

Note: Visual Studio 2015 accepts using std::vector v = { 3, (1,2,3) }; where the first number is the size of the vector.

Non-static member initializers

class FileData
{
public:
   long a {0};
   long b {1};
};

Upon creation of a FileData instance, a is initialized to 0 and b is initialized to 1.

Tuples

We can now create tuples by including the tuple header and creating a tuple using the std::make_tuple function, i.e.

auto t = std::make_tuple<int, std::string>(123, "Hello");

To get at each part of the tuple we can use either the get function or the tie function, like this

std::cout 
   << std::get<0>(t) 
   << std::get<1>(t) 
   << std::endl;

Note: the index goes in the template argument.

Or using tie

int a;
std::string b;

std::tie(a, b) = t;

std::cout 
   << a 
   << b 
   << std::endl;

std::thread, yes threads made easier

Okay, we’re not talking Task or async/await capabilities like C# but the std::thread class makes C++ threading that bit easier.

#include <thread>
#include <iostream>

void f(std::string s, int n) {
   for (int i = 0; i < n; i++)
      std::cout << s.c_str() << std::endl;
}
int main()
{
   std::thread thread1(f, "Hello", 10);
   std::thread thread2(f, "World", 3);
	
   thread1.join();
   thread2.join();

   return 0;
}

In this code we create a thread passing in the function and the arguments for the function, using join to wait until both threads have completed.

References

Modern C++ Features
Support For C++11/14/17 Features (Modern C++)

Anonymous fields in Go

Anonymous fields in Go structs allow us to shortcut dot notation as well as use composition to add methods to a type.

Shortcutting the dot notation

For example if we have something like this

type A struct {
	X int
	Y int
}

type B struct {
	Z int
}

type C struct {
	AValue A
	BValue B
}

then to access the field X on AValue we’d write code like this

c := C {}
c.AValue.X = 1

Changing the C struct fields to anonymous (i.e. removing the field name) like this

type C struct {
	A
	B
}

allows us to reduce the use of the dot notation and access X on A as if it was a field within the C struct itself, i.e.

c := C {}
c.X = 1

Okay, but what if the B struct now replaced its Z field with X, i.e.

type A struct {
	X int
	Y int
}

type B struct {
	X int
}

We now have a situation where both A and B have the same field name. We can still use anonymous fields, but we’re back to using more dot notation to get around the obvious field name conflict, i.e.

c := C {}
c.A.X = 1
c.B.X = 2

Initializers do not support shortcutting

Unfortunately we do not get the advantage of the anonymous fields with the initializer syntax, i.e. we must specify the A and B structs like this

c := C{ A : A{2, 4}, B : B { 12 }}

// or

c := C{ A : A{X : 2, Y : 4}, B : B { Z: 12 }}

// but this doesn't compile
c := C{ X : 2, Y : 4, Z : 12 }
// nor does this compile
c := C{ A.X : 2, A.Y = 3, B.Z : 12 }

Composition

More powerful than the syntactic sugar of embedded/anonymous fields, we can also use composition to increase the methods available on the struct C. For example, let’s now add a method to the A type which allows us to move our X, Y point

func (a *A) Move(amount int) {
	a.X += amount
	a.Y += amount
}

Now we can call the method on the previously declared c variable like this

c.Move(10)

We can also add further methods to type C via a composition pattern using empty types. Let’s assume we have a type D and extend the composition of type C

type D struct {}

type C struct {
	A
	B
	D
}

Now we can add methods to D and of course they’ll be available as part of C. So for example, maybe D acts as a receiver for methods that serialize data to JSON; our C type will then also appear to have those methods, as sketched below.
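Here’s a minimal, self-contained sketch of the idea (the ToJSON method and its use of encoding/json are my own illustration rather than code from the original post; I’ve also kept B with its original Z field to avoid the earlier name clash):

package main

import (
	"encoding/json"
	"fmt"
)

type A struct {
	X int
	Y int
}

type B struct {
	Z int
}

// D has no fields of its own; it exists purely to carry behaviour.
type D struct{}

// ToJSON is a hypothetical method on D; anything embedding D gains it.
func (D) ToJSON(v interface{}) (string, error) {
	b, err := json.Marshal(v)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

type C struct {
	A
	B
	D
}

func main() {
	c := C{A: A{X: 1, Y: 2}, B: B{Z: 3}}

	s, _ := c.ToJSON(c) // ToJSON is promoted from the embedded D
	fmt.Println(s)      // {"X":1,"Y":2,"Z":3}
}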

Extending the old WPF drag/drop behavior

A while back I wrote a post on creating a WPF drag/drop target behavior (well, really it’s a drop behavior). Let’s extend this and add keyboard paste capabilities and tie it into a view model.

Adding keyboard capabilities

I’ll list the full source at the end of this post; for now I’ll just show the changes from my original post.

In the behavior’s OnAttached method add

AssociatedObject.PreviewKeyDown += AssociatedObjectOnKeyDown;

in the OnDetaching method add

AssociatedObject.PreviewKeyDown -= AssociatedObjectOnKeyDown;

the AssociatedObjectOnKeyDown method looks like this

private void AssociatedObjectOnKeyDown(object sender, KeyEventArgs e)
{
   if ((e.Key == Key.V && 
      (Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control) ||
         (e.Key == Key.V) && (Keyboard.IsKeyDown(Key.LeftCtrl) || 
            Keyboard.IsKeyDown(Key.RightCtrl)))
   {
      var data = Clipboard.GetDataObject();
      if (CanAccept(sender, data))
      {
         Drop(sender, data);
      }
   }
}

Don’t worry about CanAccept and Drop at the moment. As you can see, we capture the preview key down events and, if Ctrl+V is pressed whilst the AssociatedObject has focus, we get the data object from the clipboard. We then check whether our view model accepts the data (i.e. if we only accept CSV we can fail the paste if somebody tries to paste an image into the view) and, if it does, we call Drop, which our old drop-handling code has been refactored to also use.

Both the CanAccept and Drop methods need to call into the view model for it to decide whether to accept the data and, upon accepting, how to use it. So first we need to define an interface our view model can implement which allows the behavior to call into it; here’s the IDropTarget

public interface IDropTarget
{
   bool CanAccept(object source, IDataObject data);
   void Drop(object source, IDataObject data);
}

It’s fairly obvious how this is going to work: the behavior will decode the clipboard/drop event to an IDataObject. The source argument is for situations where we might be dragging from one listbox (for example) to another and want access to the view model behind the drag source.

If we take a look at both the CanAccept method and Drop method on the behavior

private bool CanAccept(object sender, IDataObject data)
{
   var element = sender as FrameworkElement;
   if (element != null && element.DataContext != null)
   {
      var dropTarget = element.DataContext as IDropTarget;
      if (dropTarget != null)
      {
         if (dropTarget.CanAccept(data.GetData("DragSource"), data))
         {
            return true;
         }
      }
   }
   return false;
}

private void Drop(object sender, IDataObject data)
{
   var element = sender as FrameworkElement;
   if (element != null && element.DataContext != null)
   {
      var target = element.DataContext as IDropTarget;
      if (target != null)
      {
         target.Drop(data.GetData("DragSource"), data);
      }
   }
}

As you can see, in both cases we try to get the DataContext of the framework element that sent the event and if it is an IDropTarget we hand off CanAccept and Drop to it.

What’s the view model look like

So a simple view model (which just supplies a property Items of type ObservableCollection) is implemented below

public class SampleViewModel : IDropTarget
{
   public SampleViewModel()
   {
      Items = new ObservableCollection<string>();
   }

   bool IDropTarget.CanAccept(object source, IDataObject data)
   {
      return data?.GetData(DataFormats.CommaSeparatedValue) != null;
   }

   void IDropTarget.Drop(object source, IDataObject data)
   {
       var s = data?.GetData(DataFormats.CommaSeparatedValue) as string;
       if (s != null)
       {
           var split = s.Split(
              new [] { ',', '\r', '\n' }, 
                 StringSplitOptions.RemoveEmptyEntries);
           foreach (var item in split)
           {
              if (!String.IsNullOrEmpty(item))
              {
                 Items.Add(item);
              }
           }
       }
    }

    public ObservableCollection<string> Items { get; private set; }
}

In the above we only accept CSV data. The Drop method is very simple and just splits the string into separate parts, each of which is then added to the Items collection.

Our XAML (using a ListBox for the demo) looks like this

<ListBox ItemsSource="{Binding Items}" x:Name="List">
   <i:Interaction.Behaviors>
      <local:UIElementDropBehavior />
   </i:Interaction.Behaviors>
</ListBox>

Note: the x:Name is here because in MainWindow.xaml.cs (hosting this control) we needed to force focus onto the listbox at startup; otherwise the control, when empty, doesn’t seem to get focus for the keyboard events. Of course we might look to use a focus behavior instead.

The UIElementDropBehavior in full

public class UIElementDropBehavior : Behavior<UIElement>
{
    private AdornerManager _adornerManager;

    protected override void OnAttached()
    {
        base.OnAttached();

        AssociatedObject.AllowDrop = true;
        AssociatedObject.DragEnter += AssociatedObject_DragEnter;
        AssociatedObject.DragOver += AssociatedObject_DragOver;
        AssociatedObject.DragLeave += AssociatedObject_DragLeave;
        AssociatedObject.Drop += AssociatedObject_Drop;
        AssociatedObject.PreviewKeyDown += AssociatedObjectOnKeyDown;
    }

    protected override void OnDetaching()
    {
        base.OnDetaching();

        AssociatedObject.AllowDrop = false;
        AssociatedObject.DragEnter -= AssociatedObject_DragEnter;
        AssociatedObject.DragOver -= AssociatedObject_DragOver;
        AssociatedObject.DragLeave -= AssociatedObject_DragLeave;
        AssociatedObject.Drop -= AssociatedObject_Drop;
        AssociatedObject.PreviewKeyDown -= AssociatedObjectOnKeyDown;
    }

    private void AssociatedObjectOnKeyDown(object sender, KeyEventArgs e)
    {
        if ((e.Key == Key.V && (Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control) ||
            (e.Key == Key.V) && (Keyboard.IsKeyDown(Key.LeftCtrl) || Keyboard.IsKeyDown(Key.RightCtrl)))
        {
            var data = Clipboard.GetDataObject();
            if (CanAccept(sender, data))
            {
                Drop(sender, data);
            }
        }
    }

    private void AssociatedObject_Drop(object sender, DragEventArgs e)
    {
        if (CanAccept(sender, e.Data))
        {
            Drop(sender, e.Data);
        }

        if (_adornerManager != null)
        {
            _adornerManager.Remove();
        }
        e.Handled = true;
    }

    private void AssociatedObject_DragLeave(object sender, DragEventArgs e)
    {
        if (_adornerManager != null)
        {
            var inputElement = sender as IInputElement;
            if (inputElement != null)
            {
                var pt = e.GetPosition(inputElement);

                var element = sender as UIElement;
                if (element != null)
                {
                    if (!pt.Within(element.RenderSize) || e.KeyStates == DragDropKeyStates.None)
                    {
                        _adornerManager.Remove();
                    }
                }
            }
        }
        e.Handled = true;
    }

    private void AssociatedObject_DragOver(object sender, DragEventArgs e)
    {
        if (CanAccept(sender, e.Data))
        {
            e.Effects = DragDropEffects.Copy;

            if (_adornerManager != null)
            {
                var element = sender as UIElement;
                if (element != null)
                {
                    _adornerManager.Update(element);
                }
            }
        }
        else
        {
            e.Effects = DragDropEffects.None;
        }
        e.Handled = true;
    }

    private void AssociatedObject_DragEnter(object sender, DragEventArgs e)
    {
        if (_adornerManager == null)
        {
            var element = sender as UIElement;
            if (element != null)
            {
                _adornerManager = new AdornerManager(AdornerLayer.GetAdornerLayer(element), adornedElement => new UIElementDropAdorner(adornedElement));
            }
        }
        e.Handled = true;
    }

    private bool CanAccept(object sender, IDataObject data)
    {
        var element = sender as FrameworkElement;
        if (element != null && element.DataContext != null)
        {
            var dropTarget = element.DataContext as IDropTarget;
            if (dropTarget != null)
            {
                if (dropTarget.CanAccept(data.GetData("DragSource"), data))
                {
                    return true;
                }
            }
        }
        return false;
    }

    private void Drop(object sender, IDataObject data)
    {
        var element = sender as FrameworkElement;
        if (element != null && element.DataContext != null)
        {
            var target = element.DataContext as IDropTarget;
            if (target != null)
            {
                target.Drop(data.GetData("DragSource"), data);
            }
        }
    }
}

Sample Code

DragAndDropBehaviorWithPaste

REST services with Mux and Go

Mux is a “powerful URL router and dispatcher” for Go, which means we can use it to route REST-style requests and build REST/micro services.

Creating our service

Let’s start out by creating a very simple EchoService which will contain our service implementation.

In my code base I create a service folder and add a service.go file; here’s the code

Note: This creation of a new go file, interface etc. is a little over the top to demonstrate mux routes/handlers, but my OO roots are probably showing through. In reality we just need a simple function with an expected signature – which we’ll see later.

package service

type IEchoService interface {
	Echo(string) (string, error)
}

type EchoService struct {

}

func (EchoService) Echo(value string) (string, error) {
	if value == "" {
		// create an error
		return "", nil
	}
	return value, nil
}

We’ve created an interface to define our service and then a simple implementation which can return a string and an error. For now we’ll not use the error, hence return nil for it.
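If we did want to report the empty-string case rather than returning nil, a minimal sketch of the change (my own variant, using errors.New from the standard library) might look like this

import "errors"

func (EchoService) Echo(value string) (string, error) {
	if value == "" {
		// report the problem instead of silently returning nil
		return "", errors.New("echo: empty value")
	}
	return value, nil
}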

Before we move on to the Mux code which will allow us to route requests/responses, let’s create a bare-bones main.go file

package main

import (
   "goecho/service"
)

func main() {
   svc := service.EchoService{}
}

At this point this code will not compile because we haven’t used svc.

Implementing our router

Before we get started we need to run

go get -u github.com/gorilla/mux

from the shell/command prompt.

Now let’s add the code to create the mux router and to start a server on port 8080 that serves requests via our router (and service). We’ll also include code to log any fatal errors (again this code will not compile, as the previously created svc variable remains unused at this point)

package main

import (
   "goecho/service"
   "github.com/gorilla/mux"
   "log"
   "net/http"
)

func main() {

   svc := service.EchoService{}

   router := mux.NewRouter()

   // service setup goes here

   log.Fatal(http.ListenAndServe(":8080", router))
}

I think this code is pretty self-explanatory, so we’ll move straight on to the implementation of our route and handler.

I’m going to add the handler to the service package but this handler needn’t be a method on the EchoService and could just be a function in the main.go (as mentioned previously).

You’ll need to add the following imports

import (
   "net/http"
   "github.com/gorilla/mux"
)

and then add this method, which is the handler and will call into our EchoService method.

func (e EchoService) EchoHandler(w http.ResponseWriter, r *http.Request) {
   vars := mux.Vars(r)

   result, _ := e.Echo(vars["s"])

   w.WriteHeader(http.StatusOK)
   w.Write([]byte(result))
}

To allow us to call the EchoService’s Echo method we declare the handler with a receiver e of type EchoService. The arguments, of types ResponseWriter and Request, are required to decode a request and to allow us to write a response. In this example mux.Vars will be used to get at part of the REST command/URL.

Again, we’re not bothering (at this point) to worry about the errors, so result, _ is used to ignore the error.

Next we write a Status OK code back and write the result back as a byte array.
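If we later decide to surface that error to the caller, a sketch of the same handler with the error checked (my own addition, using the standard http.Error helper) could look like this

func (e EchoService) EchoHandler(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)

	result, err := e.Echo(vars["s"])
	if err != nil {
		// write an error status and message instead of a 200
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	w.WriteHeader(http.StatusOK)
	w.Write([]byte(result))
}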

Obviously we now need to set up our handler in main.go, so replace the line

// service setup goes here

with

router.HandleFunc("/echo/{s}", svc.EchoHandler).Methods("GET")

This simply creates a route so that GET calls to http://<hostname>/echo/??? (where ??? is any value, which gets mapped to mux.Vars["s"]) are passed through to the supplied handler (svc.EchoHandler).

For example, navigating to http://localhost:8080/echo/HelloWorld in your preferred web browser should display HelloWorld.
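If you’d rather exercise the route from code than from a browser, here’s a rough sketch using the standard net/http/httptest package (the test file, its name and its location in package main are my own assumptions, not part of the original post)

package main

import (
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/gorilla/mux"
	"goecho/service"
)

func TestEchoRoute(t *testing.T) {
	svc := service.EchoService{}

	// build the same route as main.go
	router := mux.NewRouter()
	router.HandleFunc("/echo/{s}", svc.EchoHandler).Methods("GET")

	// httptest starts a real server on a random local port
	server := httptest.NewServer(router)
	defer server.Close()

	resp, err := http.Get(server.URL + "/echo/HelloWorld")
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	if string(body) != "HelloWorld" {
		t.Errorf("expected HelloWorld, got %s", body)
	}
}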

We can add multiple routes/handlers. For example, let’s create a handler to respond with “Welcome to the EchoService” if the user navigates to http://localhost:8080. Place this function in main.go

func WelcomeHandler(w http.ResponseWriter, r *http.Request) {
   w.WriteHeader(http.StatusOK)
   w.Write([]byte("Welcome to the EchoService"))
}

and add this handler code before (or after) your existing handler code in main

router.HandleFunc("/", WelcomeHandler).Methods("GET")

NuGet and proxy servers

What you need to configure to use NuGet with a proxy server.

nuget.exe config -set http_proxy=http://my.proxy.address:port
nuget.exe config -set http_proxy.user=mydomain\myUserName
nuget.exe config -set http_proxy.password=myPassword

See http://stackoverflow.com/questions/9232160/nuget-behind-proxy for more information on this.

Creating Java classes from an XML schema using Maven

In my previous post Creating Java classes from an XML schema I used xjc to generate the Java classes for a small project.

Ultimately I try to automate my tasks as much as possible and also don’t want to have to write a document with lots of steps when passing code on to others to use. So I decided it was time to get Maven to (as part of its process) automatically generate the classes for me using xjc.

So now when passing my app to somebody else to take on, I just say run mvn install and it’ll do everything else for them.

To get this to work we need to use the xjc/jaxb-2 plugin.

In your pom.xml, add the following

<build>
   <pluginManagement>
      <plugins>
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-compiler-plugin</artifactId>
           <configuration>
              <source>1.7</source>
              <target>1.7</target>
           </configuration>
         </plugin>
     </plugins>
   </pluginManagement>
   <plugins>
      <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>jaxb2-maven-plugin</artifactId>
         <version>1.6</version>
         <executions>
            <execution>
               <id>request-xsd</id>
               <goals>
                  <goal>xjc</goal>
               </goals>
               <configuration>
                  <packageName>com.putridparrot.request</packageName>
                  <schemaDirectory>src/main/resources/xsd/request-xsd</schemaDirectory>
                  <clearOutputDir>false</clearOutputDir>
               </configuration>
            </execution>
            <execution>
               <id>response-xsd</id>
               <goals>
                  <goal>xjc</goal>
               </goals>
               <configuration>
                  <packageName>com.putridparrot.response</packageName>
                  <schemaDirectory>src/main/resources/xsd/response-xsd</schemaDirectory>
                  <clearOutputDir>false</clearOutputDir>
               </configuration>
            </execution>
         </executions>
      </plugin>
   </plugins>
</build>

In the above example, I actually have a request XSD and a response XSD for a webservice call. The problem (and hence why I have two separate execution sections) is that they have duplicate type names, which meant that if I had them both in the same folder I would end up with conflicts and an error during class generation.

Note: The xsd folder can be named anything you like and can contain the XSDs themselves or subfolders, as I’m using.

The jaxb2-maven-plugin does all the work for us: it executes xjc, creates the package for our resultant classes (i.e. com.putridparrot.request and com.putridparrot.response), generates the classes from the schemas in the schemaDirectory (i.e. all XSDs within that directory) and it doesn’t clear the output folder. This last point is not important if you are not outputting to the source folder, or don’t want the output folder cleared when not regenerating everything within it, but it’s very useful to stop you accidentally deleting all your source (as I did whilst outputting to the src/main/java folder initially!).

If you do not specify an output directory then the classes will be generated into your target folder, within target/generated-sources/jaxb/<your package names>.

These files are then compiled into your application and available to use in your source.

Spring configuration for a JSON/XML REST service in Java

In Creating a CXF service that responds with JSON or XML and Creating a CXF client which can get JSON or XML we created a service and a client application for interacting with REST requests using XML or JSON as the body for the messages.

The Java guys love Spring and beans, but my code was all in Java source, so I’m going to show how to convert both the server and the client to Spring configuration.

In your client/server code add a resources folder to /src/main and within that add a new XML file named whatever you like; mine are spring-client.xml and spring-server.xml (for the client and server respectively).

Changes to the server

Let’s concentrate on the server first – our code looked like this

JAXRSServerFactoryBean factoryBean = new JAXRSServerFactoryBean();
factoryBean.setResourceClasses(SampleServiceImpl.class);
factoryBean.setResourceProvider(new SingletonResourceProvider(new SampleServiceImpl()));

Map<Object, Object> extensionMappings = new HashMap<Object, Object>();
extensionMappings.put("xml", MediaType.APPLICATION_XML);
extensionMappings.put("json", MediaType.APPLICATION_JSON);
factoryBean.setExtensionMappings(extensionMappings);

List<Object> providers = new ArrayList<Object>();
providers.add(new JAXBElementProvider());
providers.add(new JacksonJsonProvider());
factoryBean.setProviders(providers);

factoryBean.setAddress("http://localhost:9000/");
Server server = factoryBean.create();

The first thing we need to do is update our pom.xml to import spring, so add the following to the properties section

<spring.version>2.5.6</spring.version>

and then these dependencies

<dependency>
   <groupId>org.springframework</groupId>
   <artifactId>spring-context</artifactId>
   <version>${spring.version}</version>
</dependency>

<dependency>
   <groupId>org.springframework</groupId>
   <artifactId>spring-core</artifactId>
   <version>${spring.version}</version>
</dependency>

Now change the Java code in our main method, replacing all the code which is listed above, to

ApplicationContext context = new ClassPathXmlApplicationContext("spring-server.xml");

Now we need to add the following to our spring-server.xml file

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jaxrs="http://cxf.apache.org/jaxrs"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
         http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd">

    <jaxrs:server address="http://localhost:9000">
        <jaxrs:serviceBeans>
            <bean class="SampleServiceImpl" />
        </jaxrs:serviceBeans>
        <jaxrs:extensionMappings>
            <entry key="json" value="application/json"/>
            <entry key="xml" value="application/xml"/>
        </jaxrs:extensionMappings>
        <jaxrs:providers>
            <bean class="org.apache.cxf.jaxrs.provider.JAXBElementProvider" />
            <bean class="org.codehaus.jackson.jaxrs.JacksonJsonProvider" />
        </jaxrs:providers>
    </jaxrs:server>
</beans>

We’re using the jaxrs configuration to create our service beans which means the server is automatically created for us.

That’s it for the server.

Changes to the client

Let’s look at the existing client code

List<Object> providers = new ArrayList<Object>();
providers.add(new JAXBElementProvider());
providers.add(new JacksonJsonProvider());

SampleService service = JAXRSClientFactory.create(
   "http://localhost:9000", SampleService.class, providers);

WebClient.client(service)
   .type(MediaType.APPLICATION_JSON_TYPE)
   .accept(MediaType.APPLICATION_JSON_TYPE);

First off, we need to add spring to the pom.xml as we needed for the server, so duplicate the steps for updating the pom.xml as already outlined.

Next we replace our client code (above) with the following

ApplicationContext context = new ClassPathXmlApplicationContext("spring-client.xml");
SampleService service = (SampleService)context.getBean("client");

and finally put the following into the spring-client.xml file

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jaxrs="http://cxf.apache.org/jaxrs"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
         http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd">

    <jaxrs:client id="client" address="http://localhost:9000" serviceClass="SampleService">
        <jaxrs:providers>
            <bean class="org.apache.cxf.jaxrs.provider.JAXBElementProvider" />
            <bean class="org.codehaus.jackson.jaxrs.JacksonJsonProvider" />
        </jaxrs:providers>
        <jaxrs:headers>
            <entry key="Accept" value="application/json" />
        </jaxrs:headers>
    </jaxrs:client>
</beans>

and that’s it – run the server, then run the client and all should work.

Note: I’ll leave it to the reader to add the relevant import statements

Benchmarking code in Go

To create a unit test in Go, we simply create a function with Test as the first part of the name; see my previous post on Unit testing in Go.

We can also write performance/benchmarking code by creating a function with Benchmark as the first part of the function name, e.g. BenchmarkDataRead, and it takes the same form as a unit test, for example

func BenchmarkEcho(b *testing.B) {
	expected := "Hello"
	actual := test.Echo("Hello")

	if actual != expected {
		b.Error("Test failed")
	}
}

Our benchmark is passed a testing.B type, which gives us functionality as per testing.T, in that we can fail a benchmark test etc. For a benchmarking type we also have the ability to start and stop the timer; for example, if we have initialization and cleanup code we might want to exclude it from the benchmark by wrapping the key code in a b.StartTimer and b.StopTimer section, i.e.

func BenchmarkEcho(b *testing.B) {
	// the timer is already running when the benchmark starts, so stop it
	// around the setup code we want to exclude
	b.StopTimer()
	expected := "Hello"

	b.StartTimer()
	actual := test.Echo("Hello")
	b.StopTimer()

	if actual != expected {
		b.Error("Test failed")
	}
}
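As an aside (not from the original post), an idiomatic Go benchmark usually runs the code under test b.N times, so the framework can decide how many iterations it needs to produce a stable timing; a minimal sketch of that form is

func BenchmarkEchoLoop(b *testing.B) {
	// the framework adjusts b.N until the measurement is statistically useful
	for i := 0; i < b.N; i++ {
		test.Echo("Hello")
	}
}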

Unit tests in Go

In the previous post I mentioned that the Go SDK often has unit tests alongside the package code. So what do we need to do to write unit tests in Go?

Let’s assume we have the package (from the previous post)

package test

func Echo(s string) string {
	return s
}

Assuming the previous code is in the file test.go, we then create a new file in the same package/folder named test_test.go (I know the name’s not great).

Let’s look at the code within this file

package test_test

import "testing"
import "Test1/test"

func TestEcho(t *testing.T) {
	expected := "Hello"
	actual := test.Echo("Hello")

	if actual != expected {
		t.Error("Test failed")
	}
}

So Go’s unit testing functionality comes in the package “testing”; our tests must start with the word Test and take a pointer to type T. T gives us the methods to report failures etc.
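As a small extension (my own sketch, not from the original post), the same test can also be written in a table-driven style, which scales nicely as more cases are added

func TestEchoTable(t *testing.T) {
	// each case doubles as both input and expected output, since Echo echoes
	cases := []string{"Hello", "World", ""}

	for _, c := range cases {
		if actual := test.Echo(c); actual != c {
			t.Errorf("Echo(%q) = %q, expected %q", c, actual, c)
		}
	}
}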

In Gogland you can select the test_test.go file, right mouse click and you’ll see the Run, Debug and Run with coverage options.