Currying and Partial applications in F#

Note: I’m going through draft posts that go back to 2014 and publishing those that may still have value. They may not be 100% up to date, but better published late than never.

Currying

Currying leads to the ability to create partial applications.

Currying is the process of taking a function with more than one argument and turning it into a series of single-argument functions, for example

let add a b = a + b

// becomes

let add a = 
    let add' b = 
        a + b
    add'

This results in a function signature which looks like this

val add : a:int -> (int -> int)

The bracketed int -> int shows the function which takes an int and returns an int.
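
We can see the curried form in action by applying the arguments one at a time – the following calls are equivalent

let r1 = add 1 2      // 3
let r2 = (add 1) 2    // also 3 – add 1 returns a function which we then apply to 2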

Partial Applications

Partial application is a way of creating new functions by supplying some of an existing function’s arguments up front. For example, in its simplest form we might have

let add a b = a + b

let partialAdd a = add a 42

So, nothing too exciting there, but what we can also do is supply just the first n arguments, as per the following example

let add a b = a + b

let partialAdd = add 42

notice how we’ve removed the argument from partialAdd, but we can still call the function thus

partialAdd 10 
|> printfn "%d"

we’re still supplying an argument because the function being called (the add function) requires one more argument. The partialAdd declaration creates a new function which is partially applied.

By declaring functions so that the last argument(s) are supplied by the calling code, we can better combine and/or reuse functions.
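
For example, because List.map takes the function first and the collection last, a partially applied add slots straight into a pipeline

let add a b = a + b

[1; 2; 3]
|> List.map (add 10)  // partially apply add, giving an int -> int function
|> printfn "%A"       // prints [11; 12; 13]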

Structs and Classes in Swift

I’ve previously posted about structs and classes in Swift; let’s look a little more in depth at the subject.

Differences between a struct and a class

As you may know, structs use value semantics whilst classes use reference semantics. In other words if you do something like this

struct Point {
   var x: Double
   var y: Double
}

let pt1 = Point(x: 1, y: 10)
let pt2 = pt1

Your pt1.x and pt1.y values are copied to pt2.x and pt2.y, so if you alter pt1 this does not affect pt2. Swift’s standard library types (String, Array and the like) use COW (copy on write), so copying is more performant than one might think if you were making lots of copies – in other words the actual copy takes place only when a value is changed – think lazy copying of values.
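
To see the value semantics in action (note pt2 must now be a var so we can mutate it)

let pt1 = Point(x: 1, y: 10)
var pt2 = pt1   // pt2 gets its own copy of the values

pt2.x = 5
print(pt1.x)    // 1.0 – pt1 is unaffected by the change to pt2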

Structs are stored on the stack and hence are very performant in terms of creation etc. Of course, if you’re making many changes to copies of a struct then things are less performant due to all the copying that needs to happen (even with COW).

Of course, in a more complicated scenario, such as the Point also having a member variable which is a class, the struct will be stored on the stack but with references to the heap-based class instances. This is an issue for structs: if a struct contains multiple reference types it’ll incur reference counting for each reference type in the struct.

Classes on the other hand use reference semantics, so if we rewrite the above as a class then pt2 is a reference to the same instance as pt1, and any changes made via pt1 will be reflected in pt2

class Point {
   var x: Double
   var y: Double

   // unlike structs, classes don't get a memberwise initializer for free
   init(x: Double, y: Double) {
      self.x = x
      self.y = y
   }
}

let pt1 = Point(x: 1, y: 10)
let pt2 = pt1

Classes are stored on the heap, which means allocation will be less performant than a struct, though of course if you’re passing around a reference to a class this is more performant than copying. However, because a reference type copies only the reference pointer, it lends itself to issues around mutation of shared state, and this becomes more of an issue when we add in concurrency.

Reference types use reference counting to handle auto-deletion (i.e. garbage collection). This does mean a reference type also requires more space than a struct, to track its reference count, and the count must be updated atomically, which adds overhead in the face of concurrency.

If you’re using String or the like in a struct, you’re storing a type whose backing storage lives on the heap, hence this will incur reference counting overhead. If we’re using a String to represent a known set of values, for example UP, DOWN, OUT_OF_SERVICE, then not only is an enum more type safe, it’s also better from a performance perspective. For example, we can declare a status enum with a String raw value like this

enum Status: String {
   case up = "UP"
   case down = "DOWN"
   case outOfService = "OUT_OF_SERVICE"
}
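
The raw value still gives us the String representation when we need it, in either direction

let status = Status.outOfService
print(status.rawValue)              // OUT_OF_SERVICE

let parsed = Status(rawValue: "UP") // Optional(Status.up), nil for unknown strings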

Late and Early binding polymorphism

I was taking part in an interview and was asked to explain “late and early binding polymorphism”.

Now I’ve programmed in C++, Java, C# and other OO languages for many years, but I have never (that I can recall) heard of late or early binding polymorphism. Safe to say, I needed to find out more…

Early Binding Polymorphism

Early (also known as static) binding is when the method to call can be resolved at compile time.

So for example, we have a Button which overrides the Click method; if we call Click on a variable declared as type Button, the compiler knows exactly which method will be invoked, i.e. it is early or statically bound

public class Window
{
  public virtual void Click()
  {
     // handles a click on the window
  }
}

public class Button : Window
{
  public override void Click()
  {
    // handles a button click
  }
}

Late Binding Polymorphism

Late (also known as dynamic or runtime) binding is when we assign a derived type to a variable of its base type. So, using the previous example code, the following causes late binding to take place, i.e. the virtual method Click is resolved at runtime.

Window window = new Button();
window.Click();
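
For contrast, here’s a minimal sketch (my addition, not from the original example) of a call that stays early bound: a non-virtual method is resolved from the compile-time type of the variable, so hiding it with new doesn’t change which method runs

public class Window
{
  // non-virtual, so calls are bound at compile time
  public void Close() => Console.WriteLine("Window.Close");
}

public class Button : Window
{
  // hides, rather than overrides, Window.Close
  public new void Close() => Console.WriteLine("Button.Close");
}

// elsewhere...
Window window = new Button();
window.Close(); // prints "Window.Close" - bound early, to the compile-time type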

Azure Functions, AWS Lambda Functions, Google Cloud Functions

Some companies, due to regulatory requirements, a desire not to get locked into one cloud vendor, or the like, look towards a multi-cloud strategy. With this in mind, this post is the first of a few showing some of the same functionality (but with different names) across the top three cloud providers: Microsoft’s Azure, Amazon’s AWS and Google Cloud.

We’re going to start with the serverless technology known as Lambda Functions in AWS (and I think they might have been the first), Azure Functions, and the Google Cloud equivalent, Google Cloud Functions. Now, in truth, the three may not be 100% compatible in terms of their APIs, but they’re generally close enough to allow us to worry only about the request and response and keep the same code for the specific function. Of course, if your function uses databases or queues, then you’re probably starting to get tied more to the vendor than is the intention of this post.

I’ve already covered Azure Functions in the past, but let’s revisit, so we can compare and contrast the offerings.

I’m not too interested in the code we’re going to deploy, so we’ll stick with JavaScript for each provider and just write a simple echo service, i.e. we send in a value and it responds with the value preceded by “Echo: ” (we can look at more complex stuff in subsequent posts).

Note: We’re going to use the UI/Dashboard to create our functions in this post.

Azure Functions

From Azure’s Dashboard

  • Type Function App into the search box or select it from your Dashboard if it’s visible
  • From the Function App page click the Create Function App button
  • From the Create Function App screen
    • Select your subscription and resource group OR create a new resource group
    • Supply a Function app name. This is essentially our app’s name, as the Function app can hold multiple functions. The name must be unique across Azure websites
    • Select Code. So we’re just going to code the functions in Azure, not supply an image
    • Select a runtime stack, let’s choose Node.js
    • Select the version (I’m sticking with the default)
    • Select the region, look for the region closest to you
    • Select the Operating System, I’m going to leave this as the default Windows
    • I’ve left the Hosting to the default Consumption (Serverless)
  • Click Review + create
  • If you’re happy, now click Create

Once Azure has done its stuff, we’ll have a resource and associated resources created for our functions.

  • Go to resource or type Function App into the search box and navigate there via that option
  • You should see your new function app with the Status running etc.
  • Click on the app name and you’ll navigate to the app’s page
  • Click on the Create in Azure portal button. You could choose VS Code Desktop or set up your own editor if you prefer
  • We’re going to create an HTTP trigger, which is basically a function which will start up when an HTTP request comes in for the function, so click HTTP trigger
    • Supply a New Function name; I’m naming mine Echo
    • Leave Authorization level as Function OR set it to Anonymous for a public API. Azure’s security model for functions is nice and simple, so I’ve chosen Function for this function, but feel free to change to suit
    • When happy with your settings, click Create

If all went well you’re now looking at the Echo function page.

  • Click Code + Test
  • The default .js code is essentially an echo service, but I’m going to change it slightly to the following
    module.exports = async function (context, req) {
      const text = (req.query.text || (req.body && req.body.text));
      context.log('Echo called with ' + text);
      const responseMessage = text
        ? "Echo: " + text
        : "Pass a POST or GET with the text to echo";
    
      context.res = {
        body: responseMessage
      };
    }
    

Let’s now test this function. The easiest way is to click the Test/Run option

  • Change the Body to
    {"text":"Scooby Doo"}
    
  • Click Run and if all went well you’ll see Echo: Scooby Doo
  • To test from our browser, let’s get the URL for our function by clicking on the Get function URL
  • The URL will be in the following format and we’ve added the query string to use with it
    https://your-function-appname.azurewebsites.net/api/Echo?code=your-function-key&text=Shaggy
    

If all went well you’ll see Echo: Shaggy and we’ve basically created our simple Azure Function.
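
We can also exercise the POST path the code supports, for example from Node 18+ (where fetch is built in), saved as an .mjs file; the URL and key are the same placeholders as above

// echo-test.mjs - POST to the function; placeholder URL/key as before
const url = "https://your-function-appname.azurewebsites.net/api/Echo?code=your-function-key";

const res = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: "Shaggy" })
});

console.log(await res.text()); // Echo: Shaggy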

Note: Don’t forget to delete your resources when you’ve finished testing OR use them to create your own code

AWS Lambda

From the AWS Dashboard

  • Type Lambda into the search box
  • Click the button Create function
  • Leave the default as Author from scratch
  • Enter the function name, echo in my case
  • Leave the runtime (this should be Node), architecture etc. as the default
  • Click Create function

Once AWS has done its stuff, let’s look at the code file index.mjs and change it to

export const handler = async (event, context) => { 
  console.log(JSON.stringify(event));
  const response = {
    statusCode: 200,
    // optional chaining, as the default Test event has no queryStringParameters
    body: JSON.stringify('Echo: ' + event.queryStringParameters?.text),
  };
  return response;
};

You’ll need to Deploy the function before it updates to use the latest code, but you’ll find that, at this point, you’ll probably still get errors using the Test option (the default test event has no query string, hence the optional chaining above). One thing we haven’t yet done is supply a trigger.

  • Either click Add trigger or from the Configuration tab click Add trigger
  • Select API Gateway, which will add an API to create an HTTP endpoint for REST/HTTP requests
  • If you’ve not got an existing API then select Create a new API
    • We’ll select HTTP API from here
    • I’m not going to create a JWT authorizer, so for Security, for now, select Open
    • Click the Add button
  • From the Configuration tab you’ll see an API endpoint; in your browser, paste the endpoint URL and add the query string so it looks a bit like this

    https://end-point.amazonaws.com/default/echo?text=Scooby%20Doo
    

Note: Don’t forget to delete your functions when you’ve finished testing OR use them to create your own code

Google Cloud Functions

From the Google Cloud dashboard

  • Type Cloud Functions into the search box
  • From the Functions page, click Create Function
  • If the Enable required APIs popup appears you’ll need to click ENABLE to ensure all APIs are enabled

From the Configuration page

  • Set the Environment if required; mine defaulted to 2nd gen, which is the latest environment
  • Supply the function name, mine’s again echo
  • Set the region to one near you
  • The default trigger is HTTPS, so we won’t need to change this
  • Just to save on having to set up authentication, let’s choose Allow unauthenticated invocations, i.e. making a public API
  • Let’s also copy the URL for now, which will be something like
    https://your-project.cloudfunctions.net/echo
    
  • Click the Next button

This defaulted to a Node.js runtime. Let’s change our code to the familiar echo code

  • The code should look like the following
    const functions = require('@google-cloud/functions-framework');
    
    functions.http('helloHttp', (req, res) => {
      res.send(`Echo: ${req.query.text || req.body.text}`);
    });
    
  • Click the Test button and GCP will create the container etc.

Once everything is deployed, change the test payload to

{
  "text": "Scooby Doo"
}

and click Run Test. If all went well you’ll see the Echo response in the GCF Testing tab.

Finally, when ready, click Deploy and then we can test our Cloud Function via the browser, using the previously copied URL, like this

https://your-project.cloudfunctions.net/echo?text=Scooby%20Doo

Note: Don’t forget to delete your function(s) when you’ve finished testing OR use them to create your own code

Deploying your static web site to AWS

Nowadays, if you’re developing a static web site, the old hosting packages you’d get from web hosting companies have to compete with offerings from the cloud. Azure, AWS and GCP all offer the ability to host your static pages and, of course, why wouldn’t they, it’s just storage and ingress to a web server, and depending upon your site’s requirements, these can be hosted for free.

In this post I’m going to deploy a simple little React app using Material UI that I have; I deployed the same to Azure a long while back (it’s available via https://www.mycountdown.co.uk/). It’s a bit of fun which displays a single countdown to a selected date/time and tells you the number of days, minutes etc. and work days.

  • Go to your AWS console; I’m clicking the Host a static web app option in the Build a solution section of the AWS console
  • I then select GitHub from the From your existing code screen, as GitHub is where I host the code for the app
  • AWS wants permissions to my repo, so I’ll authorize that
  • Next, I’m going to Install & Authorize only the one repo with the countdown code, so I select Only select repositories, but select All repositories if you prefer
  • As mentioned, I clicked Only select repositories, then I selected my countdown app repo and finally clicked Install & Authorize
  • You may be prompted for further authentication from GitHub; oddly AWS said authentication failed when I was doing this and then changed its mind and said it was successful

If all works you’ll be back at AWS in the Add repository branch section. We authorized use of a repo (or may have authorized all repos), so now we choose the repo and branch to deploy.

  • When ready click Next
  • Fill in anything required on the next page and then click Next again
  • On the review page, review your details and then click Save and deploy when ready

If all goes well you’ll see a message regarding AWS downloading the app, and the site will be provisioned. We now need to wait for AWS to Build and move the progress on to the Deploy step. Once completed, your site will have been provisioned, built and deployed. A Domain URL is assigned and, clicking on this, you should see your site.

MAUI setting the app’s dimensions on a Windows Desktop

I have a MAUI application, designed to an extent as mobile-first. In other words it looks good on a mobile phone or tablet device in full screen mode. But I really want the app to work and look good (on start-up) on Windows Desktop. Of course Windows will just give my main app window a size, and I need to do whatever I can to ensure the app looks good.

It’d be so much simpler if I could just set the dimensions of my main window (at start-up) to something that looks good. If the user resizes, that’s fine, but just starting with a good default makes everything look better.

Okay so that’s the problem at hand, how to solve this?

The first thing I thought was that I could just change Platforms/Windows/App.xaml.cs, but there’s nothing obvious there to allow this. If we look at the cross-platform App.xaml.cs instead, we can override the CreateWindow method, so let’s do that – here’s the code

public partial class App : Application
{
  public App()
  {
    InitializeComponent();
    MainPage = new AppShell();
  }

  protected override Window CreateWindow(IActivationState? activationState)
  {
    var window = base.CreateWindow(activationState);

#if WINDOWS
    if (DeviceInfo.Idiom == DeviceIdiom.Desktop)
    {
      window.Width = 500;
      window.Height = 700;
    }
#endif

    return window;
  }
}

It’s pretty self-explanatory: we use conditional compilation via #if WINDOWS to only include the code in a Windows build, but as I also have a Windows tablet I figured I’d also check that the idiom is Desktop before I try to set the window dimensions. All pretty straightforward – although not quite.

The problem is that, strangely, ANDROID and the like exist as compilation symbols, but seemingly not WINDOWS. We can fix this ourselves in one of two ways.

  • Open the .csproj file and for the net7.0-windowsxxx PropertyGroups add the following
    <DefineConstants>$(DefineConstants);WINDOWS</DefineConstants>
    
  • Open the project’s properties, locate the Build | General then for the “Conditional compilation symbols” for net7.0-windowsxxx entries, add
    $(DefineConstants);WINDOWS
    

This doesn’t fully solve things – well, it does, but Visual Studio will show the code in CreateWindow as grayed out, as if the conditional compilation directive is excluding the code from the build. It’s not, but it looks that way.

Anyway, now we are able to size the main Window on Windows at start-up.

SpecFlow/Gherkin tags

We’re going to take a look at tags.

We add tags to our features like this, using the @ to prefix a name

@Calculator
Scenario: Calculate two values
# Given/When/Then steps

We can have multiple tags for a scenario, just space separate them, like this

@Calculator @Math
Scenario: Calculate two values
# Given/When/Then steps

Great, so what use do I have for tags?

Tags can be used to create documentation, they can be used for start-up and clean-up code (see the hooks sketch below), and they can be used within the test runners to run groups of tests via their category, which is, you guessed it, denoted by the tag, for example

// run tests on anything tagged Math
dotnet test MyTests.dll --filter Category=Math

// to run tests with both Calculator and Math tags
dotnet test MyTests.dll --filter "Category=Calculator & Category=Math"

// to run tests with either Calculator or Math tags
dotnet test MyTests.dll --filter "Category=Calculator | Category=Math"

The “Custom” control type and WinAppDriver/Appium

So you’ve an application that you want to UI automation test using WinAppDriver/Appium. You’ve got a property grid, with the left-hand side being the text/label and the right-hand side being the editor. You’ve decided that a cool way to change values on the edit controls is to inspect what the ControlType is, then customise the code to SendKeys or Click or whatever on those controls.

Sound fair?

Well, all this is great if your controls are not (as the title of this post suggests) “Custom” controls, which, for WPF, is what a UserControl or Control appears as. This is fine if we have a single custom control, but not so good if we have multiple custom control types.

This issue raised its head due to a HorizontalToggle control which we’re importing into our application via a NuGet package. The control derives from Control and is pretty much invisible to the UI Automation code apart from one automation id, “SwithThumb”. So, to fix this, I wrapped the control in a UserControl and added an AutomationProperties.AutomationId attached property. Of course, we could get the source, if it’s available, and change the code ourselves, but then we’d have to handle upgrades etc., which may or may not be an issue in the future.

That’s great, now I can see the control but I have some generic code that wants to know the control type, so what can we do on this front?

The truth is we’re still quite limited in what we can do, if we’re getting all elements and trying to decide what to do based upon the ControlType. TextBoxes are Edit control types, Buttons are Button control types, but UserControls are still Custom control types.

Whilst this is NOT a perfect solution, we can derive a class from UserControl (which will still be used to wrap the original control); let’s call ours HorizontalToggleControl, and it looks like this

public class HorizontalToggleControl : UserControl
{
   protected override AutomationPeer OnCreateAutomationPeer() => 
      new HorizontalToggleControlAutomationPeer(this);
}

What we’re doing here is taking over the OnCreateAutomationPeer and supplying our own automation peer, which will itself allow us to override some of the automation properties, specifically in our case the GetAutomationControlTypeCore.

My HorizontalToggleControlAutomationPeer class looks like this

internal class HorizontalToggleControlAutomationPeer : 
   UserControlAutomationPeer
{
   public HorizontalToggleControlAutomationPeer(UserControl owner) :
      base(owner)
   {
   }

   protected override AutomationControlType GetAutomationControlTypeCore() => 
      AutomationControlType.Thumb;

   protected override string GetLocalizedControlTypeCore() =>
      nameof(HorizontalToggleControl);
}

What’s happening in the above code is that we’re creating a localized control type name, “HorizontalToggleControl”. Of course this could literally be localised and read from resources, but in our case we’re sticking with the actual control name. Unfortunately this alone is still no use to us, as the ControlType on an element will still read as Custom. Changing the GetAutomationControlTypeCore return value fixes this, but at the expense of only being able to set the control type to one of the AutomationControlType enum values. So it’s of limited use, but as mentioned previously, we only really see the SwitchThumb automation id on the original control, and so Thumb seemed like a plausible control type. In reality we might prefer CheckBox, but of course the downside there is that, if we have real check box elements, we’d need to also look at the automation name or properties to determine what type of check box this is, a real Windows one or one that acts like a check box. Either way of doing this is fine.
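
On the test side, here’s a minimal sketch of how that helps, assuming the Appium/WinAppDriver C# client, an already-created WindowsDriver session, and a hypothetical automation id of HorizontalToggle on our wrapper

// 'session' is an existing WindowsDriver<WindowsElement>
var toggle = session.FindElementByAccessibilityId("HorizontalToggle");

// thanks to the automation peer, this now reports as a Thumb rather than Custom
if (toggle.TagName == "ControlType.Thumb")
{
   toggle.Click();
}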

Is your Universal Windows application running on a device which supports this hardware?

Just going through some old draft posts and found this one, which might be of use to somebody. Let’s call it a Quick Post as there’s not too much substance…

When writing a Universal Windows application we’re basically trying to write code that will work on multiple devices. But different devices have different capabilities. For example a mobile phone has a back button, so we might want to handle the back button BackPressed event in some way, but this event is not available when the application is run on a desktop machine.

Obviously it’d be no good using #define to enable/disable code, as we want the application’s code to be universal and run “as-is” on multiple devices. So we need a method we can call at runtime to tell us whether the device supports the back button, or, more specifically, whether it supports the HardwareButtons input mechanism.

So to check whether we can hook up code to the BackPressed event we might code the following

if(ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
   HardwareButtons.BackPressed += HandleBackPressed;
}
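
For completeness, the handler referenced above might look something like this minimal sketch

private void HandleBackPressed(object sender, Windows.Phone.UI.Input.BackPressedEventArgs e)
{
   // mark the event as handled so the OS doesn't apply its default back behaviour
   e.Handled = true;
   // navigate back within the app here
}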

Running your own Question & Answer site

The team I’m currently on wanted to run a Q&A type of site internally within the workplace. We have Stack Overflow on-site, but we were looking for something specific to our application. Some research and trials later, I came across Answer. Unlike some solutions I tested, this came with a working Docker configuration that ran as easily as

docker run -d -p 9080:80 -v answer-data:/data --name answer answerdev/answer:latest
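
Breaking that command down: -d runs the container detached, -p 9080:80 maps host port 9080 to the container’s port 80 (so the site should be available at http://localhost:9080) and -v answer-data:/data stores the data in a named volume so it survives container restarts.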

If you want to have people register themselves you’ll need to set up the SMTP configuration, or I think there’s a plugin for other types of authentication. For our use, I simply ran the docker command and logged in as admin, then added other users. It looks good so far.

When you need to restart it just run the usual docker command

docker start answer

Currently I’ve set it up with the SQLite data store. Volumes, by default, are stored

On Windows in

\\wsl.localhost\docker-desktop-data\version-pack-data\community\docker\volumes\answer-data\_data

On Linux in

/var/lib/docker/volumes