Monthly Archives: October 2022

Handling “unhandled” exceptions in WPF

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

None of us want our applications to simply crash when an exception occurs, so we often ensure we’ve got catch blocks around possible exceptions. Of course, sometimes we either don’t care to handle an exception explicitly, or we forget to write the catch block, or the exception occurs somewhere that simply can’t be handled in such a structured way. In these scenarios we want to handle all “unhandled” exceptions at the application level.

Let’s take a look at some of the ways to handle “unhandled” exceptions in a WPF application.

AppDomain.UnhandledException

The AppDomain.UnhandledException or more specifically AppDomain.CurrentDomain.UnhandledException.
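
This event is raised for uncaught exceptions on any thread in the application domain; it cannot be marked as handled, so it’s really a last chance to log before the process terminates. A minimal sketch (assuming it’s wired up early in application start-up, for example in App’s OnStartup override, and using Debug.WriteLine from System.Diagnostics as a stand-in for your own logging) might look like this

AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
   // e.ExceptionObject is typed as object, so cast it to inspect the exception
   var exception = e.ExceptionObject as Exception;
   Debug.WriteLine($"Unhandled exception: {exception?.Message}, terminating: {e.IsTerminating}");
};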

Application.DispatcherUnhandledException

The Application.DispatcherUnhandledException or more specifically the Application.Current.DispatcherUnhandledException
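
This event is raised for unhandled exceptions thrown on the UI (dispatcher) thread and, unlike the AppDomain event, it can be marked as handled to stop WPF shutting the application down. A minimal sketch might look like this

Application.Current.DispatcherUnhandledException += (sender, e) =>
{
   Debug.WriteLine($"Dispatcher unhandled exception: {e.Exception.Message}");
   // setting Handled stops the exception terminating the application
   e.Handled = true;
};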

Dispatcher.UnhandledException

The Dispatcher.UnhandledException or more specifically Dispatcher.CurrentDispatcher.UnhandledException
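
Much the same pattern applies when subscribing on the dispatcher itself, for example

Dispatcher.CurrentDispatcher.UnhandledException += (sender, e) =>
{
   Debug.WriteLine($"Dispatcher unhandled exception: {e.Exception.Message}");
   e.Handled = true;
};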

AppDomain.FirstChanceException

The AppDomain.FirstChanceException or more specifically AppDomain.CurrentDomain.FirstChanceException.
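
FirstChanceException is a little different in that it’s raised for every exception thrown, before any catch block has a chance to run, so it’s better suited to diagnostics than recovery (and be careful not to throw from within the handler itself). A minimal sketch

AppDomain.CurrentDomain.FirstChanceException += (sender, e) =>
{
   // fires for every thrown exception, even ones that are subsequently caught
   Debug.WriteLine($"First chance exception: {e.Exception.Message}");
};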

TaskScheduler.UnobservedTaskException

The TaskScheduler.UnobservedTaskException
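
This event is raised when a faulted Task’s exception is never observed (awaited or otherwise inspected) and the task is garbage collected. A minimal sketch, calling SetObserved to mark the exception as observed

TaskScheduler.UnobservedTaskException += (sender, e) =>
{
   // e.Exception is an AggregateException wrapping the task's exception(s)
   Debug.WriteLine($"Unobserved task exception: {e.Exception?.Message}");
   e.SetObserved();
};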

GitHub Actions build scripts for various languages

I’ve been through a bit of a wave of writing my unit of measurement library code for various programming languages, originally starting in F#, then moving through C# and Java to Go, Rust, Swift, TypeScript and Python. Each time I’ve needed/wanted to create a GitHub Actions build workflow for each language. To be honest GitHub gives you all the information you need, but I’m going to list what I’ve got and what worked for me.

Creating the workflow file

I’ve covered much of this before, but I’ll recreate it here for completeness.

You’ll need to start by following these steps

  • Create a .github folder in the root of your GitHub repository
  • Within the .github folder create a workflows folder
  • Within the workflows folder create a file – it can be named whatever you like, let’s use build.yml (yes it’s a YAML file)

The workflow configuration

Note: YAML is a format where whitespace is significant. In the code snippets below I will left justify the code, basically breaking the format but making it easier to read. At the end of the post I’ll put all the snippets together to show the correct format, if you’re just here for the code, then I’d suggest scrolling to the bottom of the post.

Your build.yml (if you followed my naming) will start with the name of the workflow, so let’s simply name it Build, i.e.

name: Build

Next up we need to list the events that cause the workflow to start; usually these will be things like “on push” or “on pull_request”. So now we add the following to the .yml file

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  workflow_dispatch: # Manual Run

GitHub has moved away from using master as the default branch name in favour of main, so obviously change master to main in the above if that’s what your repository uses. I tend to (at least initially) also include a way to manually run the workflow if I need to, hence the workflow_dispatch entry.

So we have the name of the workflow and the events that trigger it; now we need the tasks or, in GitHub Actions terms, the jobs to run. So we’ll just add the following to our .yml

jobs:
   build:

Now for the next step, we can either simply list the VM OS to run on using

runs-on: ubuntu-latest

Or, as all the languages listed at the start of this post are cross platform, we might want to try building them on each OS we want to support; in this case we create a strategy and matrix of OSes to support. It’d also be good to see in the build which OS we’re building for, so we’ll create the following

name: Build on ${{ matrix.os }}
strategy:
  matrix:
    os: [macos-latest, ubuntu-latest, windows-latest]
runs-on: ${{ matrix.os }}

In the above, we’ll try to run our build on Mac, Ubuntu and Windows latest OS environments. If you only care about one environment you can use the previous runs-on example or just reduce the os array to the single environment.

Note: If you take a look at virtual-environments, you’ll see a list of supported YAML labels for the OS environments.

Within the matrix we can declare variables that are used in runs-on but also in steps, so for example if we wanted to list the version(s) of a toolchain or language that we want to support, we could do something like

matrix:
  os: [macos-latest, ubuntu-latest, windows-latest]
  go-version: [1.15.x]

So now we have matrix.go-version available for deciding which version of Go (for example) we’ll install.

We’ve set up the environments; now we need to actually list the steps of our build. The first step is to check out the code, so we’ll have something like this

steps:
- uses: actions/checkout@v2

Next we create each step of the build process, usually with a name for each step to make it easier to see what’s happening as part of the build workflow. At this point we’ll start to get into the specifics for each toolchain and language. For example, for F# and C# we’ll start by setting up dotnet, whereas for Swift we can just start calling the swift CLI. So let’s go through each toolchain/language setup I have for my unit conversion libraries/packages.

.NET steps

For .NET we’ll need to set up .NET with the version(s) we want to build against, then install any required NuGet dependencies, then build our code, run any tests and, if required, package up and even deploy to NuGet (which is covered in my post Deploying my library to Github packages using Github actions).

- name: Setup .NET Core SDK 6.0.x
  uses: actions/setup-dotnet@v1
  with:
    dotnet-version: '6.0.x'
- name: Install dependencies
  run: dotnet restore
- name: Build
  run: dotnet build --configuration Release --no-restore
- name: Test
  run: dotnet test FSharp.Units.Tests/FSharp.Units.Tests.fsproj --no-restore --verbosity normal
- name: Create Package
  run: dotnet pack --configuration Release

Note: Of course change the FSharp.Units.Tests/FSharp.Units.Tests.fsproj to point to your test project(s).

Java steps

For Java we’re just going to have a setup step and then use Maven to build and run tests

- name: Setup JDK 8
  uses: actions/setup-java@v2
  with:
    java-version: '8'
    distribution: 'adopt'
- name: Build with Maven
  run: mvn --batch-mode --update-snapshots verify

Go steps

With Go we’re using the previously declared go-version to allow us to target the version of Go we want to set up; of course we could do the same for .NET, Java etc. Next we install dependencies, basically because I want to install golint to lint my code. Next up we build our code using the Go CLI, then vet and lint, before finally running the tests.

- name: Setup Go
  uses: actions/setup-go@v2
  with:
    go-version: ${{ matrix.go-version }}
- name: Install dependencies
  run: |
    go version
    go get -u golang.org/x/lint/golint
- name: Build
  run: go build -v ./...
- name: Run vet & lint
  run: |
    go vet ./...
    golint ./...
- name: Run testing
  run: go test -v ./...

Rust steps

For Rust we’ll keep the steps pretty simple

- name: Build
  run: cargo build --verbose
- name: Run tests
  run: cargo test --verbose

Swift steps

For Swift we just call the Swift Package Manager CLI directly

- name: Build
  run: swift build
- name: Run tests
  run: swift test

TypeScript steps

- name: Use Node.js
  uses: actions/setup-node@v1
  with:
    node-version: '12.x'
- name: Install dependencies
  run: yarn
- run: yarn run build
- run: yarn run test

Python steps

- name: Set up Python
  uses: actions/setup-python@v2
  with:
    python-version: '3.x'
- name: Test with unittest
  run: |
    python -m unittest

Google Play Testing Releases

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

The Google Play Console allows you to submit your application to the Google Play store. Obviously you’ll need to go through the store for a Production release, but the Google Play Console also allows you to go through testing phases with your application in a controlled manner (as opposed to just supplying a .apk from your own web site or similar).

There are three testing options, Internal, Closed and Open.

Internal Testing

Internal testing allows us to pretty much just upload our .aab to the Google Play Console and supply a list of testers’ email addresses (these seem to need to be Gmail addresses, apart from the dev owner of the account). There is a limit of a maximum of 100 testers for Internal testing.

Once uploaded, and assuming you’ve supplied the required information (as will be highlighted by the Internal testing page) and you’ve set up your list of testers, you can simply send a link to each tester and they will then be given access to your application (obviously the email they sign into Google Play with will need to match the one supplied in your list of testers).

Internal testing is very useful in that you can put an application out to test with minimal “form filling”: no need for an application name even, or descriptions or screenshots etc.

Google Play will run a cut-down Firebase Test Lab (I believe) which will run some UI testing on your application against different devices and OS versions, take screenshots and even video of the automated interactions – this is extraordinarily useful. A report (see Pre-launch report) will also highlight performance issues, accessibility issues etc.

Closed Testing

Closed testing can be seen as, maybe, an alpha release. It’s similar to Internal testing in that you again dictate who can access your application, but unlike Internal Testing you will need to supply assets along the lines of what your final release would look like, i.e. description of the app, images, screenshots etc.

Open Testing

One might see this as a beta release phase; after Internal and/or Closed testing you may want to open the application to a larger group of potential testers – at this stage anyone can join your test programme. If you’ve gone straight from Internal to Open testing then, as per Closed Testing, you will need to supply assets, descriptions etc. that will be displayed in the Google Play store.

References

Set up an open, closed or internal test

Trying out Avalonia

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

Avalonia is aiming to support a XAML way of implementing cross-platform UIs. Whilst I can use Xamarin Forms for developing on iOS, Android etc., it doesn’t currently support Linux.

So let’s have a look at implementing a very basic Hello World application using Avalonia. This post covers using Visual Studio 2017 (obviously on Windows).

  • Create a WPF application, mine’s named HelloAvaloniaWorld
  • Remove the following references
    • PresentationCore
    • PresentationFramework
    • WindowsBase

    as we’re not using WPF

  • Add NuGet packages
    • Avalonia
    • Avalonia.Desktop

Now replace App.xaml with the following

<Application xmlns="https://github.com/avaloniaui">
    <Application.Styles>
        <StyleInclude Source="resm:Avalonia.Themes.Default.DefaultTheme.xaml?assembly=Avalonia.Themes.Default"/>
        <StyleInclude Source="resm:Avalonia.Themes.Default.Accents.BaseLight.xaml?assembly=Avalonia.Themes.Default"/>
    </Application.Styles>
</Application>

Replace the application class in App.xaml.cs with

using Avalonia;
using Avalonia.Markup.Xaml;

namespace HelloAvaloniaWorld
{
    public class App : Application
    {
        public override void Initialize()
        {
            AvaloniaXamlLoader.Load(this);
        }

        private static void Main()
        {
            AppBuilder.Configure<App>()
                .UsePlatformDetect()
                .Start<MainWindow>();
        }
    }
}

Let’s now alter the MainWindow.xaml to look like

<Window xmlns="https://github.com/avaloniaui">
    <Grid>
        <TextBlock FontSize="16" Text="Hello World"/>
    </Grid>
</Window>

And replace the MainWindow.xaml.cs contents with

using Avalonia;
using Avalonia.Controls;
using Avalonia.Markup.Xaml;

namespace HelloAvaloniaWorld
{
    public class MainWindow : Window
    {
        public MainWindow()
        {
            this.InitializeComponent();
            this.AttachDevTools();
        }

        private void InitializeComponent()
        {
            AvaloniaXamlLoader.Load(this);
        }
    }
}

Before we move on, select each XAML file and display the file properties in Visual Studio, from here, remove the Custom Tool and change the Build Action to Embedded Resource.

From AssemblyInfo.cs remove the section

[assembly: ThemeInfo(...)]

Before the application builds you’ll need to copy the packages\SkiaSharp.1.57.1\build and runtime folders to the solution root folder, into a folder named SkiaSharp\1.57.1.

Device specific code in Xamarin

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

Occasionally we might need to handle text or UI controls slightly differently on different devices.

In a shared code project we’d probably look to write code using conditional compilation, but this wouldn’t work for PCL code. Also conditional compilation doesn’t work in XAML. So whilst conditional compilation is still a valid technique for shared code projects an alternative is runtime branching based upon the device the code is running on (in other words if or switch code). Xamarin already supplies us with a simple mechanism to achieve this using the OnPlatform method on the Device class.

PCL/Runtime device specific code

So, in code we can use the Device class like this

public class MyPage : ContentPage
{
   public MyPage()
   {
      Padding = Device.OnPlatform(
          new Thickness(0,20,0,0), 
          new Thickness(0),
          new Thickness(0));
   }
}

Or from XAML we can use the OnPlatform element, for example

<ContentPage>
   <ContentPage.Padding>
      <OnPlatform x:TypeArguments="Thickness" iOS="0,20,0,0" />
   </ContentPage.Padding>
</ContentPage>

Conditional Compilation directives

Just to complete the picture, let’s look at what compiler directives exist – these are not exclusive to shared code projects but obviously are available for any use (a small usage sketch follows the list)

  • __IOS__ for iOS code
  • __ANDROID__ for Android code
  • __ANDROID_nn__ for Android code, where nn is the Android API level supported
  • WINDOWS_UWP for Universal Windows Platform code
  • WINDOWS_APP for Windows 8.1 code
  • WINDOWS_PHONE_APP for Windows Phone 8.1 code
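
Here’s a minimal, purely illustrative sketch of how these symbols might be used in a shared code project (the method name is just an example)

public static string PlatformName()
{
#if __IOS__
   return "iOS";
#elif __ANDROID__
   return "Android";
#elif WINDOWS_UWP
   return "UWP";
#else
   return "Unknown";
#endif
}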

Cors and expressjs

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

This is a simple reminder post for myself, please see cors for complete documentation.

To enable CORS within expressjs, add the package

yarn add cors

Then import it using

import cors from "cors";

and now to use cors within the middleware we use

var server = express()

server.use(cors());

Cross site access in IIS

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

How do we handle CORS (cross site) access within IIS, i.e. how do we allow/enable it?

We simply need to create a web.config file in the root of our web application, here’s an example

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <defaultDocument>
      <files>
        <clear />
        <add value="data.json" />
      </files>
    </defaultDocument>
    <staticContent>
      <mimeMap fileExtension=".json" mimeType="text/json" />
    </staticContent>
    <httpProtocol>
      <customHeaders>
       	<add name="Access-Control-Allow-Origin" value="http://localhost:3000" />
      	<add name="Access-Control-Allow-Methods" value="GET, PUT, POST, DELETE, HEAD, OPTIONS" />
      	<add name="Access-Control-Allow-Credentials" value="true"/>
      	<add name="Access-Control-Allow-Headers" value="X-Requested-With, origin, content-type, accept" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>

Here, within the customHeaders section we explicitly allow an origin of http://localhost:3000 (this value could instead be set to *).

C# 8.0 nullable and non-nullable reference types

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

One of the key C# 8.0 features is nullable/non-nullable reference types, but before we get started you’ll need to enable the language features by editing the csproj and adding the following to each PropertyGroup

<PropertyGroup>
    <LangVersion>8.0</LangVersion>
    <Nullable>enable</Nullable>
</PropertyGroup>

You can also enable/disable on a per file basis using

#nullable enable

and to disable

#nullable disable

What’s it all about?

So we’ve always been able to assign null to a reference type (which is also the default value when not initialized), but of course this means that if we try to call a method on a null reference we’ll get the good old NullReferenceException. So, for example, the following (without #nullable enable) will compile quite happily without warnings (although you will see warnings in the Debug output)

string s = null;
var l = s.Length;

Now if we add #nullable enable we’ll get warnings that we’re attempting to assign null to a non-nullable type. Just as we use ? to make primitives nullable (for example int?), we now mark our reference types with ?, hence the code now looks like this

string? s = null;
var l = s.Length;

In other words we’re saying we expect that the string might be null. The use of non-nullable reference types will hopefully highlight possible issues that may result in NullReferenceExceptions, but as these are currently just warnings you’ll probably want to enable Warnings as Errors.

This is an opt-in feature for obvious reasons i.e. it can have a major impact upon existing projects.

Obviously you still need to handle possible null values, partly because you might be working with libraries which do not have this nullable reference type option enabled, but also because we can “trick” the compiler. We know the previously listed code will result in a Dereference of a possibly null reference warning, but what if we change things to the following

public static string GetValue(string s)
{
   return s;
}

// original code changed to
string s = GetValue(null);
var l = s.Length;

This still gives us a warning Cannot convert null literal to non-nullable reference type, so that’s good, but we can change GetValue to this

public static string GetValue([AllowNull] string s)
{
   return s;
}

and now, no warnings exist – the point being that even with nullable reference types enabled and not marking a reference type as nullable, we can still get null reference exceptions.

Attributes

As you’ve seen, there are also (available in .NET Core 3.0) some attributes that we can apply to our code to give the compiler a little more information about our null expectations. You’ll need the following

using System.Diagnostics.CodeAnalysis;
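
As a small sketch of how these attributes can help (using the namespace above; the method here is purely illustrative), NotNullWhen tells the compiler that an out parameter will not be null when the method returns true, which suits the classic Try pattern

public static bool TryGetName(int id, [NotNullWhen(true)] out string? name)
{
   name = id == 1 ? "one" : null;
   return name != null;
}

// within this block the compiler knows name is not null
if (TryGetName(1, out var name))
{
   var length = name.Length; // no "possible null" warning here
}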

See Update libraries to use nullable reference types and communicate nullable rules to callers for a full list of attributes etc.

Loading indicator in Ag-Grid

Note: This post was written a while back but sat in draft. I’ve published this now, but I’m not sure it’s relevant to the latest versions etc. so please bear this in mind.

Ag-Grid is a great data grid which supports React (amongst other frameworks). Here’s a couple of simple functions showing how to show and hide the loading indicator using the gridApi

showOverlay = () => {
  if (this.gridApi !== undefined) {
    this.gridApi.showLoadingOverlay();
  }
}

hideOverlay = () => {
  if (this.gridApi !== undefined) {
    this.gridApi.hideOverlay();
  }
}

File explorer extended search in Windows

File Explorer has (as we all know) a search box in the top right. Of course we can enter some text and File Explorer will search for file names containing that text as well as text within files, but there are more options to tailor our search.

See Common File Kinds.

Let’s look at some examples

File name search

If we prefix our search term with file: like this

file: MyFile
file:.sln

Then we only get search results for all files with the text as part of the filename or file extension.

File content search

If we prefix our search term with content: like this

content: SomeContent

Then we only get search results from within files (i.e. text search).

Kind

We can use the kind option along with file type, such as text, music etc. See Common File Kinds.

kind:text

The above lists all text files.

Other file options

We can also use other file options, for example

datemodified:lastweek
modified:lastweek
modified:05/10/2022
size:>500

The first two options are the same and list files modified last week. The size option lists files with a size larger than 500 bytes.

See also File Kind: Everything.

Boolean Operators

As you can see we can also use Boolean Operators.