
Azure Functions, AWS Lambda Functions, Google Cloud Functions

Some companies, due to regulatory requirements, a desire not to get locked into one cloud vendor or the like, look towards a multi-cloud strategy. With this in mind, this post is the first of a few showing some of the same functionality (but with different names) across the top three cloud providers: Microsoft's Azure, Amazon's AWS and Google Cloud.

We're going to start with the serverless technology known as Lambda functions in AWS (and I think AWS might have been the first), Azure Functions and the Google Cloud equivalent, Google Cloud Functions. Now, in truth the three may not be 100% compatible in terms of their APIs, but they're generally close enough to allow us to worry only about the request and response and keep the same code for the specific function. Of course, if your function uses databases or queues, then you're probably starting to get tied more to the vendor than is the intention of this post.

I’ve already covered Azure Functions in the past, but let’s revisit, so we can compare and contrast the offerings.

I’m not too interested in the code we’re going to deploy, so we’ll stick with JavaScript for each provider and just write a simple echo service, i.e. we send in a value and it responds with the value preceded by “Echo: ” (we can look at more complex stuff in subsequent posts).

Note: We’re going to use the UI/Dashboard to create our functions in this post.

Azure Functions

From Azure’s Dashboard

  • Type Function App into the search box or select it from your Dashboard if it’s visible
  • From the Function App page click the Create Function App button
  • From the Create Function App screen
    • Select your subscription and resource group OR create a new resource group
    • Supply a Function app name. This is essentially our app's name, as the Function app can hold multiple functions. The name must be unique across Azure websites
    • Select Code, so we're just going to code the functions in Azure rather than supply an image
    • Select a runtime stack, let’s choose Node.js
    • Select the version (I’m sticking with the default)
    • Select the region, look for the region closest to you
    • Select the Operating System, I’m going to leave this as the default Windows
    • I’ve left the Hosting to the default Consumption (Serverless)
  • Click Review + create
  • If you’re happy, now click Create

Once Azure has done its stuff, we'll have a resource and associated resources created for our functions.

  • Click Go to resource or type Function App into the search box and navigate there via that option
  • You should see your new function app with a Status of Running etc.
  • Click on the app name and you'll navigate to the app's page
  • Click on the Create in Azure portal button. You could choose VS Code Desktop or set up your own editor if you prefer
  • We're going to create an HTTP trigger, which is basically a function which will start up when an HTTP request comes in for the function, so click HTTP trigger
    • Supply a New Function name; I'm naming mine Echo
    • Leave Authorization level as Function OR set to Anonymous for a public API. Azure's security model for functions is nice and simple, so I've chosen Function for this function, but feel free to change to suit
    • When happy with your settings, click Create

If all went well you’re now looking at the Echo function page.

  • Click Code + Test
  • The default .js code is essentially an echo service, but I’m going to change it slightly to the following
    module.exports = async function (context, req) {
      const text = (req.query.text || (req.body && req.body.text));
      context.log('Echo called with ' + text);
      const responseMessage = text
        ? "Echo: " + text
        : "Pass a POST or GET with the text to echo";
    
      context.res = {
        body: responseMessage
      };
    }
    

Let's now test this function. The easiest way is to click the Test/Run option

  • Change the Body to
    {"text":"Scooby Doo"}
    
  • Click Run and if all went well you’ll see Echo: Scooby Doo
  • To test from our browser, let’s get the URL for our function by clicking on the Get function URL
  • The URL will be in the following format and we’ve added the query string to use with it
    https://your-function-appname.azurewebsites.net/api/Echo?code=your-function-key&text=Shaggy
    

If all went well you’ll see Echo: Shaggy and we’ve basically created our simple Azure Function.
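
If you'd rather exercise the endpoint from code than via the browser, a minimal C# sketch might look like the following (the URL and function key are placeholders for your own values)

using System;
using System.Net.Http;

// call the deployed Echo function; replace the URL and function key with your own values
using var client = new HttpClient();
var response = await client.GetStringAsync(
  "https://your-function-appname.azurewebsites.net/api/Echo?code=your-function-key&text=Shaggy");
Console.WriteLine(response); // expect "Echo: Shaggy"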

Note: Don’t forget to delete your resources when you’ve finished testing this OR use it to create your own code

AWS Lambda

From the AWS Dashboard

  • Type Lambda into the search box
  • Click the button Create function
  • Leave the default as Author from scratch
  • Enter the function name. echo in my case
  • Leave the runtime (this should be Node), architecture etc. as the default
  • Click Create function

Once AWS has done its stuff, let's look at the code file index.mjs and change it to

export const handler = async (event, context) => {
  console.log(JSON.stringify(event));
  const response = {
    statusCode: 200,
    // queryStringParameters is only populated when the function is invoked via an HTTP request (e.g. API Gateway)
    body: JSON.stringify('Echo: ' + event.queryStringParameters.text),
  };
  return response;
};

You'll need to Deploy the function before it updates to use the latest code, but you'll find that, at this point, you'll probably get errors using the Test option. One thing we haven't yet done is supply a trigger.

  • Either click Add trigger or from the Configuration tab click Add trigger
  • Select API Gateway, which will add an API to create an HTTP endpoint for REST/HTTP requests
  • If you've not already created an API then select Create a new API
    • We'll select HTTP API from here
    • I'm not going to create a JWT authorizer, so for Security, for now, select Open
    • Click the Add button
  • From the Configuration tab you'll see an API endpoint; paste the endpoint URL into your browser and add the query string so it looks a bit like this

    https://end-point.amazonaws.com/default/echo?text=Scooby%20Doo
    

Note: Don't forget to delete your functions when you've finished testing it OR use it to create your own code

Google Cloud Functions

From the Google Cloud dashboard

  • Type Cloud Functions into the search box
  • From the Functions page, click Create Function
  • If the Enable required APIs popup appears you'll need to click ENABLE to ensure all APIs are enabled

From the Configuration page

  • Set the Environment if required; mine defaulted to 2nd gen, which is the latest environment
  • Supply the function name, mine's again echo
  • Set the region to one near you
  • The default trigger is HTTPS, so we won't need to change this
  • Just to save on having to set up authentication, let's choose Allow unauthenticated invocations, i.e. making a public API
  • Let's also copy the URL for now, which will be something like
    https://your-project.cloudfunctions.net/echo

  • Click the Next button

This defaulted to creating a Node.js runtime. Let's change our code to the familiar echo code

  • The code should look like the following
    const functions = require('@google-cloud/functions-framework');

    functions.http('helloHttp', (req, res) => {
      res.send(`Echo: ${req.query.text || req.body.text}`);
    });

  • Click the Test button and GCP will create the container etc.

Once everything is deployed then change the test payload to

{
  "text": "Scooby Doo"
}

and click Run Test. If all went well you'll see the Echo response in the GCF Testing tab.

Finally, when ready, click Deploy and then we can test our Cloud function via the browser, using the previously copied URL, like this

https://your-project.cloudfunctions.net/echo?text=Scooby%20Doo

Note: Don't forget to delete your function(s) when you've finished testing this OR use it to create your own code

Deploying your static web site to AWS

Nowadays, if you're developing a static web site, the old hosting packages you'd get via web hosting companies have to compete with offerings from the cloud. Azure, AWS and GCP all offer the ability to host your static pages and of course, why wouldn't they, it's just storage and ingress to a web server; depending upon your site requirements, these can be hosted for free.

In this post I'm going to deploy a simple little React app using Material UI that I have; I deployed the same to Azure a long while back (it's available via https://www.mycountdown.co.uk/). It's a bit of fun which displays a single countdown to a selected date/time and tells you the number of days, minutes etc. and work days.

  • Go to the AWS console; I'm clicking the Host a static web app option in the Build a solution section
  • I then select GitHub from the From your existing code screen, as GitHub is where I host the code for the app
  • AWS wants permissions to my repo, so I'll authorize that
  • Next, I'm only going to Install & Authorize the one repo with the countdown code, so I select Only select repositories, but select All repositories if you prefer
  • As mentioned, I clicked Only select repositories, then I selected my countdown app repo and finally I clicked Install & Authorize
  • You may be prompted for further authentication from GitHub; oddly AWS said authentication failed when I was doing this and then changed its mind and said it was successful

If all works you'll be back at AWS in the Add repository branch section. We authorized use of a repo but may have authorized all repos, so now we choose the repo and branch to deploy.

  • When ready click Next
  • Fill in anything required on the next page and then click Next again
  • On the review page, review your details and then click Save and deploy when ready

If all goes well you'll see a message regarding AWS downloading the app and the site will be provisioned. We now need to wait for AWS to Build and move the progress on to the Deploy step. Once completed your site will have been provisioned, built and deployed. A Domain URL is assigned and, clicking on this, you should see your site.

MAUI setting the apps dimensions on a Windows Desktop

I have a MAUI application, designed to an extent as mobile-first. In other words it looks good on a mobile phone or tablet device in full screen mode. But I really want the app to work and look good (on start up) on Windows Desktop. Of course Windows will just give my main app window a size and I need to do whatever I can to ensure the app looks good.

It'd be so much simpler if I could just set the dimensions of my main window (at start up) to something that looks good. If the user resizes, that's fine, but just starting with a good default makes everything look better.

Okay so that’s the problem at hand, how to solve this?

The first thing I thought was that I could just change Platforms/Windows/App.xaml.cs, but there's nothing obvious there to allow this. If we look at App.xaml.cs we can override the CreateWindow method, so let's do that – here's the code

public partial class App : Application
{
  public App()
  {
    InitializeComponent();
    MainPage = new AppShell();
  }

  protected override Window CreateWindow(IActivationState? activationState)
  {
    var window = base.CreateWindow(activationState);

#if WINDOWS
    if (DeviceInfo.Idiom == DeviceIdiom.Desktop)
    {
      window.Width = 500;
      window.Height = 700;
    }
#endif

    return window;
  }
}

It's pretty self-explanatory: we use conditional compilation via #if WINDOWS to only include the code for a Windows build, but as I also have a Windows tablet I figured I'd also check that the idiom is Desktop before I try to set the window dimensions. All pretty straightforward – although not quite.

The problem is that, strangely, ANDROID and the likes exist as defines, but seemingly not WINDOWS. We can fix this ourselves in one of two ways.

  • Open the .csproj file and for the net7.0-windowsxxx PropertyGroups add the following
    <DefineConstants>$(DefineConstants);WINDOWS</DefineConstants>
    
  • Open the project's properties, locate Build | General, then for the “Conditional compilation symbols” for the net7.0-windowsxxx entries, add
    $(DefineConstants);WINDOWS
    

This doesn't fully solve things – well, it does, but Visual Studio will show the code in CreateWindow as grayed out, as if the conditional compilation directive is excluding the code from the build – it's not, but it looks that way.

Anyway, now we are able to size the main Window on Windows at start-up.

Specflow/Gherkin tags

We’re going to take a look at tags.

We add tags to our features like this, using the @ to prefix a name

@Calculator
Scenario: Calculate two values
# Given/When/Then steps

We can have multiple tags for a scenario, just comma separate them, like this

@Calculator, @Math
Scenario: Calculate two values
# Given/When/Then steps

Great, so what use do I have for tags?

Tags can be used to create documentation, they can be used for start up and clean up code, and they can be used within the test runners to run groups of tests via their category which is, you guessed it, denoted by the tag, for example

// run tests on anything tagged Math
dotnet test MyTests.dll --filter Category=Math

// to run tests with both Calculator and Math tags
dotnet test MyTests.dll --filter "Category=Calculator & Category=Math"

// to run tests with either Calculator or Math tags
dotnet test MyTests.dll --filter "Category=Calculator | Category=Math"
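
Tags also drive hooks. As a rough sketch (assuming the SpecFlow NuGet packages and a binding class of our own), a scoped hook that only runs for scenarios tagged @Math might look like this

using TechTalk.SpecFlow;

[Binding]
public class MathHooks
{
   // runs before any scenario tagged @Math; an untagged [BeforeScenario] would run for every scenario
   [BeforeScenario("Math")]
   public static void SetUpMathScenario()
   {
      // start up code for Math scenarios, e.g. seed test data
   }

   // runs after any scenario tagged @Math
   [AfterScenario("Math")]
   public static void TearDownMathScenario()
   {
      // clean up code for Math scenarios
   }
}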

The “Custom” control type and WinAppDriver/Appium

So you've an application that you want to UI automation test using WinAppDriver/Appium. You've got a property grid with the left hand side being the text/label and the right hand side being the editor. You've decided that a cool way to change values on the edit controls is to inspect what the ControlType is, then customise the code to SendKeys or Click or whatever on those controls.

Sound fair?

Well, all this is great if your controls are not (as the title of this post suggests) "Custom" controls. For WPF this means a UserControl or Control. This is fine if we have a single custom control but not so good if we have multiple custom control types.

This issue raised its head due to a HorizontalToggle control which we're importing into our application via a NuGet package. The control derives from Control and is pretty much invisible to the UI automation code apart from one automation id, "SwitchThumb". So to fix this I wrapped the control in a UserControl and added an AutomationProperties.AutomationId attached property. Of course, we could get the source if it's available and change the code ourselves, but then we'd have to handle upgrades etc., which may or may not be an issue in the future.

That’s great, now I can see the control but I have some generic code that wants to know the control type, so what can we do on this front?

The truth is we're still quite limited in what we can do if we're getting all elements and trying to decide what to do based upon the ControlType. TextBoxes are Edit control types, Buttons are Button control types, but UserControls are still Custom control types.

Whilst this is NOT a perfect solution, we can derive a class from UserControl (which will still be used to wrap the original control); let's call ours HorizontalToggleControl and it looks like this

public class HorizontalToggleControl : UserControl
{
   protected override AutomationPeer OnCreateAutomationPeer() => 
      new HorizontalToggleControlAutomationPeer(this);
}

What we’re doing here is taking over the OnCreateAutomationPeer and supplying our own automation peer, which will itself allow us to override some of the automation properties, specifically in our case the GetAutomationControlTypeCore.

My HorizontalToggleControlAutomationPeer class looks like this

internal class HorizontalToggleControlAutomationPeer : 
   UserControlAutomationPeer
{
   public HorizontalToggleControlAutomationPeer(UserControl owner) :
      base(owner)
   {
   }

   protected override AutomationControlType GetAutomationControlTypeCore() => 
      AutomationControlType.Thumb;

   protected override string GetLocalizedControlTypeCore() =>
      nameof(HorizontalToggleControl);

}

Now, what's happening in the above code is that we're creating a localized control type name, "HorizontalToggleControl". Of course this could literally be localised and read from the resources, but in our case we're sticking with the actual control name. This, unfortunately, is still of no use to us, as the ControlType in an element will still read as Custom. Changing the GetAutomationControlTypeCore return value fixes this, but at the expense of only being able to set the control type to one of the AutomationControlType enums. So it's of limited use, but as mentioned previously, we only really see the SwitchThumb automation id on the original control and so Thumb seemed like a possible control type. In reality we might prefer CheckBox, but of course the downside there is that if we have real check box elements, we'd need to ensure we also look at the automation name or a property to determine what type of check box this is – a real Windows one or one that just acts like a check box. Either way of doing this is fine.
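
On the test side, a rough sketch of how generic WinAppDriver/Appium code might now pick this control up could look like the following (this assumes the Appium.WebDriver package, an existing WindowsDriver session named session and that the wrapper was given an automation id of, say, HorizontalToggle; the attribute values shown are illustrative)

// find the wrapper via the automation id we gave it
var toggle = session.FindElementByAccessibilityId("HorizontalToggle");

// with the custom automation peer in place the UIA control type now reads as Thumb rather than Custom,
// and the localized control type carries the wrapper's name
var controlType = toggle.GetAttribute("ControlType");            // e.g. "ControlType.Thumb"
var localizedType = toggle.GetAttribute("LocalizedControlType");  // e.g. "HorizontalToggleControl"

if (localizedType == "HorizontalToggleControl")
{
   toggle.Click(); // treat it like a toggle in the otherwise generic automation code
}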

Is your Universal Windows application running on a device which supports this hardware?

Just going through some old draft posts and found this one, which might be of use to somebody. Let’s call it a Quick Post as there’s not too much substance…

When writing a Universal Windows application we’re basically trying to write code that will work on multiple devices. But different devices have different capabilities. For example a mobile phone has a back button, so we might want to handle the back button BackPressed event in some way, but this event is not available when the application is run on a desktop machine.

Obviously it’d be no good using #define to enable/disable code as we want the application’s code to be universal and run “as-is” on multiple devices. So we need a method call at runtime to tell us whether the device supports the BackButton. Or more specifically whether it supports the HardwareButtons input mechanism.

So to check whether we can hook up code to the BackPressed event we might code the following

if(ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
   HardwareButtons.BackPressed += HandleBackPressed;
}
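
For completeness, a minimal sketch of a handler might look something like the following (rootFrame here is assumed to be a reference to the app's root Frame)

private void HandleBackPressed(object sender, Windows.Phone.UI.Input.BackPressedEventArgs e)
{
   // navigate back if we can and mark the event as handled so the default back behaviour doesn't kick in
   if (rootFrame != null && rootFrame.CanGoBack)
   {
      e.Handled = true;
      rootFrame.GoBack();
   }
}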

Running your own Question & Answer site

The team I'm currently on wanted to run a Q&A type of site, internally within the workplace. We have Stack Overflow onsite, but we were looking for something specific to our application. Some research and trials later, I came across Answer. Unlike some solutions I tested, this came with a working Docker configuration that ran as easily as

docker run -d -p 9080:80 -v answer-data:/data --name answer answerdev/answer:latest

If you want to have people register themselves you'll need to set up the SMTP configuration, or I think there's a module for another type of authentication. For our use, I simply ran the docker command and logged in as admin, then added other users. It looks good so far.

When you need to restart it just run the usual docker command

docker start answer

Currently I have it set up with the SQLite data store. Volumes, by default, are stored

On Windows in

\\wsl.localhost\docker-desktop-data\version-pack-data\community\docker\volumes\answer-data\_data

On Linux in

/var/lib/docker/volumes

Multilingual support for an ASP.NET web API application

We sometimes wish to make our web API return error messages or other types of string data in different languages. The process for this is similar to MAUI and WinForms, we just do the following

  • Create a folder named Resources in our web API project
  • Add a RESX file, which we’ll name AppResources.resx. This will be the default language, so in my case this will include en-GB strings
  • Ensure the file has a Build Action of Embedded resource and Custom Tool of ResXFileCodeGenerator
  • Add a name (which is the key to your resource string) and then add the value. This is the string (i.e. the translated string) for the given key
  • Let's add another RESX file, but this time name it AppResources.{language identifier}.resx, for example AppResources.de-DE.resx, which will contain the German translations of the keys/names
  • Again ensure the Build Action and Custom Tool are correctly set

The ResXFileCodeGenerator will generate properties in the AppResources class for us to access the resource strings. For example

AppResources.ExceptionMessage
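
To give a feel for usage, here's a minimal sketch of a controller returning that resource string (ErrorsController and its route are just hypothetical examples; the generated AppResources class resolves the value from the current UI culture unless AppResources.Culture has been set)

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ErrorsController : ControllerBase
{
   [HttpGet]
   public IActionResult Get()
   {
      // returns the localized string for the ExceptionMessage key,
      // e.g. the de-DE value when the current UI culture is German
      return Ok(AppResources.ExceptionMessage);
   }
}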

If we need to test our translations without changing our OS language, we simply use code such as the following in the Program.cs of the web API

AppResources.Culture = new CultureInfo("de-DE");

Azure Functions

Azure Functions (like AWS Lambdas and GCP Cloud Functions) allow us to write serverless code literally just as functions, i.e. no need to fire up a web application or VM. Of course, just like Azure containers, there is a server component, but we, the developer, need not concern ourselves with handling configuration etc.

Azure functions will be spun up as and when required, meaning we will only be charged when they’re used. The downside of this is they have to spin up from a “cold” state. In other words the first person to hit your function will likely incur a performance hit whilst the function is started then invoked.

The other thing to remember is that Azure Functions are stateless. You might store state with a DB like Cosmos DB, but essentially a function is invoked, does something, then after a timeout period it's shut back down.

Let’s create an example function and see how things work…

  • Create a new Azure Functions project
  • When you get to the options for the Function, select Http trigger and select the Anonymous Authorization level
  • Complete the wizard by clicking the Create button

The Anonymous authorization level allows the function to be triggered without providing a key. The HTTP trigger, as it sounds, means the function is triggered by an HTTP request.

The following is basically the code that’s created from the Azure Function template

public static class ExampleFunction
{
  [FunctionName("Example")]
  public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
  {
    log.LogInformation("HTTP trigger function processed a request.");

    string name = req.Query["name"];

    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    var responseMessage = string.IsNullOrEmpty(name)
      ? "Pass a name in the query string or in the request body for a personalized response."
      : $"Hello, {name}. This HTTP triggered function executed successfully.";

    return new OkObjectResult(responseMessage);
  }
}

We can actually run this and debug via Visual Studio in the normal way. We’ll get a URL supplied, something like this http://localhost:7071/api/Example to access our function.

As you can see from the above code, we’ll get passed an ILogger and an HttpRequest. From this we can get query parameters, so this URL above would be used like this http://localhost:7071/api/Example?name=PutridParrot

Of course the whole purpose of the Azure Function is for it to run on Azure. To publish it…

  • From Visual Studio, right mouse click on the project and select Publish
  • For the target, select Azure. Click Next
  • Select Azure Function App (Windows) or Linux if you prefer. Click Next again
  • Either select a Function instance if one already exists or create a new instance from this wizard page

If you’re creating a new instance, select the resource group etc. as usual and then click Create when ready.

Note: I chose Consumption plan, which is the default when creating an Azure Functions instance. This is basically a “pay only for executions of your functions app”, so should be the cheapest plan.

The next step is to Finish the publish process. If all went well, you'll see everything configured and you can close the Publish dialog.

From the Azure dashboard you can simply type Function App into the search textbox and you should see the published function with a status of Running. If you click on the function name it will show you the current status of the function as well as its URL, which we can access like we did with localhost, i.e.

https://myfunctionssomewhere.azurewebsites.net/api/Example?name=PutridParrot

Blazor and the GetFromJsonAsync exception TypeError: Failed to Fetch

I have an Azure hosted web api. I also have a simple Blazor standalone application that's meant to call the API to get a list of categories to display, i.e. the Blazor app is meant to call the Azure web api, fetch the data and display it – should be easy enough, right?

The web api can easily be accessed via a web browser or a console app using the .NET HttpClient, but the Blazor code using the following simply kept throwing an exception with the cryptic message “TypeError: Failed to Fetch”

@inject HttpClient Http

// Blazor and other code

protected override async Task OnInitializedAsync()
{
   try
   {
      _categories = await Http.GetFromJsonAsync<string[]>("categories");
   }
   catch (Exception e)
   {
      Debug.WriteLine(e);
   }
}

What was happening is that I was actually getting a CORS error, which sadly isn't really reported via the exception, so it's not exactly obvious.

If you get this error interacting with your web api via Blazor then go to the Azure dashboard. I'm running my web api as a container app, so type CORS into the left search bar of the resource (in my case a Container App) and you should see the CORS subsection under Settings.

Add * to the Allowed Origins and click apply.

Now your Blazor app should be able to interact with the Azure web api app.
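
Alternatively, if you control the web api's code, CORS can also be enabled in the API itself rather than via the portal. A minimal ASP.NET Core sketch allowing any origin might look like this (you'd likely restrict it to your Blazor app's origin in production, and the categories endpoint is purely illustrative)

// in the web api's Program.cs (minimal hosting model) – a permissive policy for testing
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
   options.AddDefaultPolicy(policy =>
      policy.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod()));

var app = builder.Build();

app.UseCors();

// the endpoint the Blazor app calls
app.MapGet("/categories", () => new[] { "Category1", "Category2" });

app.Run();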