Monthly Archives: November 2019

JavaScript generator functions

In C# we have iterators, which might simply return data, such as a collection, but which can also yield return values, allowing us to return iterable data more dynamically.

Yield basically means return a value but remember the position within the iterator, so that when the iterator is called again, execution resumes at the next statement within the iterator.

The same types of operation are available in JavaScript using generator functions. Let’s look at a simple example

function* generator() {
  yield "A";
  yield "B";
  yield "C";
  yield "D";
}

The function* denotes a generator function, and it’s only within a generator function that we can use the yield keyword.

When this function is called, a Generator object is returned which adheres to both the iterable protocol and the iterator protocol, which basically means the Generator acts like an iterator.

Hence executing the following code will result in each yielded value being output, i.e. A, B, C, D

for(const element of generator()) {
  console.log(element);
}
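Since the returned Generator is itself an iterator, we can also drive it by hand with next(). A minimal sketch (the variable names here are my own, not from the post):

```typescript
// The same generator as above; calling next() resumes execution at the
// statement after the last yield, returning { value, done } pairs.
function* generator(): Generator<string> {
  yield "A";
  yield "B";
  yield "C";
  yield "D";
}

const it = generator();
const collected: string[] = [];

let step = it.next();
while (!step.done) {
  collected.push(step.value);
  step = it.next();
}
// collected is now ["A", "B", "C", "D"]; a further next() would
// return { value: undefined, done: true }
```

This is exactly what the for...of loop does for us behind the scenes.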

TypeScript 3.7.2

TypeScript 3.7.2 has just been released.

First off you’re going to want to install it globally on your machine, hence run the following (obviously this will install the latest version, so it’s not specific to version 3.7.2)

npm install -g typescript

or, if you want it local to your project, of course you can run

npm install -D typescript 

If you’re using VS Code and you want to set it up for this version (if it’s not already set for the latest), press F1, select Preferences: Open User Settings and add (or edit) the following at the root level of the JSON

"typescript.tsdk": "node_modules\\typescript\\lib",

I’m not going to go through the new features, except to say we’ve now got optional chaining using the ?. operator (familiar from C#) and nullish coalescing using the ?? operator.
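As a quick illustration of those two operators (the Person shape below is my own example, not from the release notes):

```typescript
interface Person {
  name: string;
  address?: { city?: string };
}

const person: Person = { name: "Bob" };

// Optional chaining: ?. short-circuits to undefined rather than throwing
// when address is null/undefined
const city = person.address?.city;

// Nullish coalescing: ?? supplies a default only for null/undefined
// (unlike ||, which would also replace "" or 0)
const displayCity = city ?? "Unknown";
```

Here displayCity ends up as "Unknown", since person has no address.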

Creating a nuget package (revisited)

I’ve covered some of this previously in my post Creating Local Packages with NuGet, but I wanted to drill down a little more into it here.

Part of the reason for a revisit is that I wanted to look at creating the relevant .nuspec for a couple of projects I’ve put on github and I wanted to cover the .nuspec files in a little more depth.

Before we start

Before we start you’ll probably want to grab the latest nuget.exe from Available NuGet Distribution Versions. The current recommended version is v4.9.4, and this should be placed in your project folder (or of course wherever you prefer in your path).

Generating a nuspec file

We can now run

nuget spec

in my case, this produced the following

<?xml version="1.0"?>
<package >
  <metadata>
    <id>Package</id>
    <version>1.0.0</version>
    <authors>PutridParrot</authors>
    <owners>PutridParrot</owners>
    <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Package description</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright 2019</copyright>
    <tags>Tag1 Tag2</tags>
    <dependencies>
      <dependency id="SampleDependency" version="1.0" />
    </dependencies>
  </metadata>
</package>

As you can see, it’s supplied the basics along with an example of a dependency. The dependencies are the packages our project depends upon. The id can be found via the NuGet website or from the NuGet packages section within Visual Studio, and the versions can be found in the same way.

If we wish to add files to our nuspec we add a files section after the metadata end tag. For example

  </metadata>

   <files>
      <file src="" target="" />
   </files>

</package>

The src is the relative location of the file to add (maybe we’re adding a README.txt, for example). The target is where the file should be copied to; using content as the start of the target will add the file to the content folder.

Naming conventions

The next thing I want to touch on is naming conventions, which I think become important if you’re intending to deploy to the NuGet site (as opposed to keeping packages local to your organization or the like).

The Names of Assemblies and DLLs discusses some possible conventions. Obviously <Company>.<Component>.dll is a very sensible convention to adopt, as we need our assemblies to be named as uniquely as possible to avoid name clashes with others.

Obviously using such naming conventions will tend to push us towards similar namespace naming conventions, so have a read of Names of Namespaces too.

Generating our NuGet package from a project

Before we look at the nuspec file itself, let’s cover the simple way of generating our NuGet package, as this might be all you need if the component/code you want to package is fairly self-contained.

Before we create the package, let’s create a bare bones .nuspec file because otherwise, the nuget tool will generate one along these lines

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <id>PutridParrot.Collections</id>
    <version>1.0.0.0</version>
    <title>PutridParrot.Collections</title>
    <authors>PutridParrot</authors>
    <owners>PutridParrot</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Description</description>
    <copyright>Copyright © PutridParrot 2017</copyright>
    <dependencies />
  </metadata>
</package>

Note: the project I ran this against was PutridParrot.Collections.csproj

So let’s take this and change it a little – create a .nuspec named after your project (mine’s PutridParrot.Collections.nuspec) and paste the below into it (changing names etc. as you need).

<?xml version="1.0"?>
<package >
  <metadata>
    <id>Your Project Name</id>
    <version>1.0.0.0</version>
    <title>Your project title</title>
    <authors>Your Name</authors>
    <owners>Your Name</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Your Description</description>
    <releaseNotes>First Release</releaseNotes>
    <copyright>Copyright 2017</copyright>
    <tags>collections</tags>
  </metadata>
</package>

Note: The tags are used by the NuGet repos, so it’s best to come up with several tags (with a space between each) for your project.

Now that we’ve got a nuspec file, it will be embedded into the package by nuget.

From the command line, (easiest from the folder containing the project you want to package) run

nuget pack <project-name>.csproj

Note: as the package is zipped, just append .zip to it to open it using File Explorer in Windows, so you can see how it’s laid out. Don’t forget to remove the .zip before relocating the package to a local or remote host.

Autogenerating parts of the nuspec from AssemblyInfo

We can actually tokenize some of our nuspec and have nuget use our project’s AssemblyInfo.cs file, see Replacement tokens.

This means, for example version might be written like this

<version>$version$</version>

and will have this value automatically replaced and the nupkg will be named with that same version.
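For example, a minimal tokenized metadata section might look like the following ($id$, $author$ and $description$ are further replacement tokens documented alongside $version$; the values come from your project’s AssemblyInfo):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <authors>$author$</authors>
    <description>$description$</description>
  </metadata>
</package>
```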

Multiple files or how to use nuget pack with the csproj

Running nuget pack against a project is useful but what if you want to handle multiple projects and/or non-project files? Then we would be better off editing the nuspec file to pull in the files we want.

Here’s an example of the previous nuspec which now includes more than one version of the project’s DLL.

<?xml version="1.0"?>
<package >
  <metadata>
    <id>Your Project Name</id>
    <version>1.0.0.0</version>
    <title>Your project title</title>
    <authors>Your Name</authors>
    <owners>Your Name</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Your Description</description>
    <releaseNotes>First Release</releaseNotes>
    <copyright>Copyright 2017</copyright>
    <tags>collections</tags>
  </metadata>
  <files>
    <file src="bin\Release\PutridParrot.Collections.dll" target="lib\net40" />
    <file src="bin\Release\PutridParrot.Collections.dll" target="lib\netstandard1.6" />
  </files>  
</package>

Now run

nuget pack PutridParrot.Collections.nuspec

Notice we’re running nuget pack against our nuspec file instead; this will bring in the two DLLs and make them available to lib\net40 and lib\netstandard1.6, thus targeting two different .NET frameworks.

Supported frameworks gives a list of the valid target frameworks that we can assign.

Anatomy of an Azure Function

In a previous post we looked at creating Azure Functions; hopefully from this we’re able to quickly and easily get things up and running. We also looked (in another post) at the various triggers and hooks which help run our functions.

For this post I want to look into functions a little deeper.

Template code

As we’ve seen, from the Azure Portal or from Visual Studio (with the relevant extensions) we can easily generate our function code. Different function types have slightly different code patterns.

This post will concentrate on C# code, but we can generate code in JavaScript, F# and more.

Azure Portal Generated Code

The Azure Portal generates code for us, which for a C# developer is actually C# scripting code. Hence, unlike via Visual Studio, we need to explicitly reference external assemblies which are not included by default. We do this using #r (requires) and the assembly name, for example

#r "System.Data"

It’s fairly obvious that an Azure function is literally that, a function/method – not a class with multiple methods where you create an instance of the class and interact with it. Hence Azure functions are static and may be async, or not, as required.

Depending upon the type of function we’re creating, we may return a response, void or another object type. Again, depending on the type of function, we may have a TimerInfo or an HttpRequestMessage, and there may be other arguments, whether they’re parameters for a query, connections to Blob storage (via a Stream), queue names or whatever.

With the Azure Portal functions we also have the function.json file generated for us (and in some cases a README.md file).

function.json

This file acts as a binding (using JSON as the extension suggests) which defines the inputs and outputs as well as some configuration details, such as whether it’s disabled or not.
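For instance, the function.json for an HTTP-triggered function typically looks something like the following (values such as authLevel and the supported methods will vary with your function’s configuration):

```json
{
  "disabled": false,
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "get", "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```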

Visual Studio Generated Code

Code from Visual Studio differs from the Portal code in that we’re creating assemblies here, not uploading scripts to the Azure Portal. Hence we add references in the standard way to the project. We don’t have a function.json file, but instead have a host.json file as well as a local.settings.json file (which we’ll look into later).

Our function name (i.e. the name we see in the Portal etc.) is defined using the FunctionNameAttribute on the method. The configuration that exists in the function.json file becomes attributes on the Run method (generated by Visual Studio). So, for example, the type of trigger is defined as an attribute on the first argument, for example

[FunctionName("TimerTest")]
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, TraceWriter log)
{
   log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}

and this attribute will tend to hold the configuration for the trigger; for example this TimerTriggerAttribute has the CRON format string, an HttpTriggerAttribute might have the AuthorizationLevel used, the HTTP methods used, the Route etc., and the BlobTriggerAttribute has connection information etc. Obviously, just like the Azure generated code, other parameters exist depending upon the type of trigger and/or params passed to the method. Also included is the TraceWriter for us to log information to.

For anything other than an HttpTrigger function, you’ll need to supply AzureWebJobsStorage details etc. in the local.settings.json file. To do this, from the Visual Studio command prompt and from your project’s folder (where its host.json and local.settings.json files reside) run

func azure functionapp fetch-app-settings (your-app-name)

replacing your-app-name with the app you will need to have created in the Azure Portal. If you have multiple projects then you have to do this in each one’s project folder. This command will fill in connection strings, account details and encrypted keys etc.

Note: if the func exe is not in your path etc. you can run

npm i -g azure-functions-core-tools

to install the tools.

Running/Debugging locally

Obviously, for as much development as possible, it would be best if, whilst testing, we ran our functions locally. As mentioned, to integrate your Azure Portal app settings we can run

func azure functionapp fetch-app-settings (your-app-name)

and you may have noticed when we run via the debugger the func.exe is executed. So it also makes sense that we can run our binary representations of our functions locally outside of the debugger.

Within your project’s output folder, i.e. bin\Debug\net461 (for example), and obviously after you’ve actually built your project, from a command prompt or Visual Studio command prompt (with func.exe in your path) you can run

func host start

This will fire up an Azure emulator, read host.json, generate the host for your functions and then run a service on localhost:7071 (in my case) to host your functions.

Azure function triggers

In a previous introductory post to Azure functions I mentioned that there were more than just the HttpTrigger.

The current list of triggers available (at least for C#/.NET developers) are

  • HttpTrigger – triggers a function based upon an HTTP request, like a REST service call
  • HttpTriggerWithParameters – triggers a function based upon an HTTP request, binding route parameters to the function’s arguments
  • TimerTrigger – triggers a function based upon a CRON like time interval/setup
  • QueueTrigger – triggers when a message is added to the Azure Queue Storage
  • BlobTrigger – triggers when a blob is added to the specified container
  • EventHubTrigger – triggers when a new event is received via the event hub
  • ServiceBusQueueTrigger – triggers when a message is added to the service bus queue
  • ServiceBusTopicTrigger – triggers when a message is added to the service bus topic
  • ManualTrigger – triggers when the Run button is pressed in the Azure Portal
  • Generic Webhook – triggers when a webhook request occurs
  • GitHub Webhook – triggers when a GitHub webhook request occurs

Many of these are pretty self-explanatory, but let’s have a quick look at some of them anyway.

Note: I’m not going to cover the triggers that require other services, such as queues, event hubs, web hooks etc. in this post, as they require more than a short paragraph to set up. So I’ll look to dedicate a post to each as and when I begin using them.

HttpTrigger

The HTTP trigger is analogous to a REST-like service call, i.e. when an HTTP request is received the Azure function is called, passing in the HttpRequestMessage object (and a TraceWriter for logging).

Here’s a bare bones function example.

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
   log.Info("C# HTTP trigger function processed a request.");
   return req.CreateResponse(HttpStatusCode.OK, "Hello World");
}

The HttpTrigger supports a lot of HTTP methods, including the obvious GET, POST, DELETE along with HEAD, PATCH, PUT, OPTIONS, TRACE.

Note: Everything we can do with HttpTriggerWithParameters, i.e. changing the HTTP method type, adding a route to the function.json and adding parameters to our method to decode the route parameters, can be implemented with the HttpTrigger. At the time of writing this post, I’m not wholly sure of the benefit of the HttpTriggerWithParameters over HttpTrigger.

HttpTriggerWithParameters

An HttpTriggerWithParameters trigger is similar to an HttpTrigger except when we create one via the Azure Portal, by default we get the following code

public static HttpResponseMessage Run(HttpRequestMessage req, string name, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");
    return req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}

Notice the return is not wrapped in a Task, and the method includes the string name parameter. Via the Azure Portal this defaults to the HTTP method GET – GET is not the only supported method for this function, it just appears that way by default. To add HTTP methods we go to the function’s Integrate menu option and add the supported methods; parameters can then be passed in via the Azure Portal’s Query parameters. More interestingly, though, a route is automatically created for us in the associated function.json file. The route looks like this

"route": "HttpTriggerCSharp/name/{name}"

This obviously means we can navigate to the function using https://(app-name).azurewebsites.net/api/HttpTriggerCSharp/name/{name}

Let’s add an age parameter to the query using the Add parameter option in the portal. If we change our route to look like this (in the function.json file)

"route": "HttpTriggerCSharp/name/{name}/{age}"

and the source to look like this

public static HttpResponseMessage Run(HttpRequestMessage req, string name, int? age, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Fetching the name from the path parameter in the request URL
    return req.CreateResponse(HttpStatusCode.OK, "Hello " + name + " " + age);
}

we can now navigate to the function using https://(app-name).azurewebsites.net/api/HttpTriggerCSharp/name/{name}/{age}

TimerTrigger

As the name suggests, this trigger is dependent upon a supplied schedule (using CRON format). Obviously this is very useful in situations where we maybe run reports every day at a set time or could be used to periodically check some source (which is not supported as a trigger) such as a file location.

The code generated by the Azure Portal looks like this

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}

ManualTrigger

The manual trigger is a trigger which is executed via the Azure Portal’s Run button. One might view it a little like running a batch file, or simply as a function which you do not want others to have access to. The code generated by the portal looks like this

public static void Run(string input, TraceWriter log)
{
    log.Info($"C# manually triggered function called with input: {input}");
}

and values are supplied as input (to the string input argument) via the Request Body input box as plain text.

Python REST service using Flask

Wanting to create a Python based REST service? Let’s use Flask and see what we can do.

Start off by installing Flask

pip install Flask

Now let’s write some code

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

app.run()

This is nice and simple and it’s pretty obvious how things work. As you can see, we create the Flask application and, in this sample, run it (although the Flask website often shows running the app using flask run from the command line).

Each “route” is defined and mapped directly to a function. In this example the root of the URL is mapped to the hello function.

The route can also contain variables, which are shown within <>, for example

@app.route('/<name>')
def hello(name):
    return f"Hello {name}"

We can also define converters (similar to declaring the type of the variable) for the variables. So, for example, if we need to access something by an integer id, then we might have

@app.route('/employee/<int:id>')
def employee(id):
    return f"Employee {id}"

By default the converter used (as you’d probably expect) is string, but Flask also supports int, float, path, any and uuid.

By default each route is using the HTTP method GET, but we can also define the supported methods for each route, i.e.

from flask import request  # request needs importing alongside Flask

@app.route('/login', methods=['GET', 'POST'])
def login():
    if request.method == 'POST':
        return do_the_login()
    else:
        return show_the_login_form()

References

Flask is Fun

C# 8.0 enhancements with pattern matching

C# 8.0 language features include pattern matching similar to that within F#.

Tuples are first-class citizens of C#, so we can write something like this

public (string, int) CreateTuple(string s)
{
   return (s, s.Length);
}

This results in a ValueTuple being created, which can still be accessed using the properties Item1 and Item2; however, we can also use tuples within pattern matching, for example

var t = CreateTuple("Ten");
switch (t)
{
   case (string s, 1):
      return $"Single Character {s}";
   case (string s, int l):
      return $"{s}:{l}";
}      

In the above we’re matching firstly against a tuple where the first item is a string and the second an int with value 1; the second case statement is basically going to catch everything else. Obviously the “catch all” must go after the more specific pattern match.

This can be tidied further by using expressions. So the above becomes

return t switch
{
   (string s, 1) => $"Single Character {s}",
   (string s, int l) => $"{s}:{l}"
};

Properties and fields in TypeScript

There are several ways to handle properties within TypeScript; let’s look at a simple class

class Point {
    x: number;
    y: number;
}

Whilst VS Code etc. will say these are properties, they’re more like public fields (if you compare them to a language such as C#), hence missing the functional side that comes with properties.

By default, having no access modifier means the properties x and y are public. TypeScript allows us to mark them as private, in which case we could then write functional getters and setters as per Java, for example

class Point {
    private x: number;
    private y: number;

    getX() {
        return this.x;
    }
    setX(x: number) {
        this.x = x;
    }
    getY() {
        return this.y;
    }
    setY(y: number) {
        this.y = y;
    }
}

TypeScript also supports C# style property getter and setters like this

class Point {
    private x: number;
    private y: number;

    get X() : number {
        return this.x;
    }
    set X(x : number) {
        this.x = x;
    }
    get Y() : number {
        return this.y;
    }
    set Y(y : number) {
        this.y = y;
    }
}

and like C# these get/set methods result in property style syntax, i.e.

var p = new Point();
p.Y = 4;
console.log(p.Y);

Constructors within TypeScript can reduce the ceremony for creating fields and properties by using the private or public keywords for constructor arguments, for example

class Point {
  constructor(public x: number, public y: number) {
  }
}

// becomes (in the compiled JavaScript)

class Point {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }
}

Using private constructor arguments is equivalent to declaring private fields – the compiled JavaScript is the same as for public, since the access modifiers only exist at compile time. So for example

class Point {
  constructor(private x: number, private y: number) {
  }
}

// becomes

class Point {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }
}

If we need to customise the getter/setter functionality, we can’t use parameter properties with the same names as the accessors (that would be a duplicate identifier), so we fall back to private backing fields, for example

class Point {
    constructor(private _x: number, private _y: number) {
    }

    public get x(): number {
        return this._x;
    }
    public set x(x: number) {
        this._x = x;
    }
    public get y(): number {
        return this._y;
    }
    public set y(y: number) {
        this._y = y;
    }
}
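To see the accessor syntax working end to end, here’s a small self-contained sketch (the Temperature class is my own illustration, not from the post); note the underscore-prefixed backing field avoids a name clash with the accessor names:

```typescript
class Temperature {
  // Parameter property creates and initialises the private backing field
  constructor(private _celsius: number) {}

  get celsius(): number {
    return this._celsius;
  }
  set celsius(value: number) {
    this._celsius = value;
  }
  // A derived, read-only property: with no setter, assignment is a compile error
  get fahrenheit(): number {
    return this._celsius * 9 / 5 + 32;
  }
}

const t = new Temperature(100);
t.celsius = 0;            // invokes the setter
const f = t.fahrenheit;   // invokes the getter: 0 * 9/5 + 32 = 32
```

As with the C#-style Point example, callers use plain property syntax while the class keeps full control over reads and writes.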

Redux and storybook

We’ve implemented our React UI and set up storybook to test it, but we’re using redux as our store, so how do we use this in storybook?

Storybook allows us to create a decorator, which is really just a wrapper around our story. For example, we add a decorator using .addDecorator like this

storiesOf("SomeComponent", module)
  .addDecorator(withProvider)
  .add("default", () => 
    <SomeComponent />
 );

Within the .addDecorator we can add more React code or HTML, maybe to position our test component centrally on the screen; in this case we use it to wrap the story in a Provider.

As you can see from the above code we’ve got a withProvider value which looks like this

const withProvider = (story) => <Provider store={store}>{story()}</Provider>

Hence the code takes a story (the story we’re testing) and we simply wrap it within a Provider element having previously created the redux store using the standard createStore function, i.e.

const store = createStore(reducer);

Now we can test our UI/UX code within storybook using the redux store for its state.
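For a complete picture, the reducer behind that createStore call might look like this minimal sketch (the counter state shape and action names are assumptions purely for illustration):

```typescript
// A minimal reducer of the kind passed to redux's createStore
type State = { count: number };
type Action = { type: "increment" } | { type: "reset" };

const initialState: State = { count: 0 };

function reducer(state: State = initialState, action: Action): State {
  switch (action.type) {
    case "increment":
      // Return a new state object rather than mutating the old one
      return { count: state.count + 1 };
    case "reset":
      return initialState;
    default:
      return state;
  }
}

const next = reducer(initialState, { type: "increment" });
```

Because the same store is handed to the Provider in the decorator, the component under test behaves exactly as it does in the real application.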

One log decorator to rule them all


In a previous post we looked at implementing log decorators using the experimental features of TypeScript.

Obviously it’d be far nicer if we had a single log decorator that we can apply in each place where a decorator might be used, i.e. a log decorator which can be used instead of logClass, logMethod etc. We can simply create a log function which takes any number of arguments and have it delegate to the specific log function as required (a factory function, basically). I’m not going to write the whole thing here, but something along the lines of the following code would do the job

function log(...args: any[]): any {
  switch (args.length) {
    case 1:
      // return logClass
      break;
    case 2:
      // return logProperty
      break;
    case 3:
      return (typeof args[2] === "number") ?
        logParameter :
        logMethod;
    default:
      throw new Error();
  }
}
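To sanity-check the dispatch rules in isolation, here’s a small sketch that just reports which decorator the factory would pick (the stub name and return values are my own, not from the original post):

```typescript
// Decorators are called with different signatures: a class decorator gets the
// constructor (1 arg), a property decorator gets (target, key) (2 args), and
// both method and parameter decorators get 3 args - distinguished by whether
// the third is a numeric parameter index or a PropertyDescriptor.
function whichDecorator(...args: any[]): string {
  switch (args.length) {
    case 1:
      return "class";
    case 2:
      return "property";
    case 3:
      return typeof args[2] === "number" ? "parameter" : "method";
    default:
      throw new Error("not a valid decorator signature");
  }
}

const kind = whichDecorator({}, "save", 0); // a numeric third arg => parameter
```

The real log function would make the same checks but delegate to logClass, logProperty, logParameter or logMethod instead of returning a string.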