Blazor/ASP.NET Core on Docker

I wanted to get a Blazor server application up and running on Ubuntu within a docker container (I’m running the whole thing on a Raspberry Pi 4 with Ubuntu Server).

The first stage for this post will simply be about creating a Dockerfile and creating a Blazor server application via the dotnet template.

We’re going to want the latest version of .NET Core, so let’s start by creating a very bare-bones Dockerfile which creates an image based upon
mcr.microsoft.com/dotnet/core/sdk:3.1 and exposes the standard HTTP port used in the Blazor server template, i.e. port 5000

Here’s the Dockerfile

FROM mcr.microsoft.com/dotnet/core/sdk:3.1
ENV APP_HOME /home
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
EXPOSE 5000
CMD [ "bash" ]

To build this image run the following

docker rmi dotnet-env --force
docker build -t dotnet-env .

The first line is solely there to remove any existing image (which is especially useful whilst developing the image). The second line builds the Dockerfile and tags the resulting image as dotnet-env.

Once built, let’s run the image to see all is good with it. So simply run

docker run -it --rm -p 5000:5000 -v /home/share:/home/share dotnet-env

In this example we run Docker in interactive mode, using -p to map the host port 5000 to the port exposed by the image. We’ve also created a volume mapping from the host’s /home/share to the container’s /home/share.

Once we’ve run the image up we should be placed into a BASH command prompt, now we can simply run

dotnet new blazorserver -o MyServer

to create the project MyServer. Once created, cd into the MyServer folder and run

dotnet run

A Kestrel server should start up, and you might be able to access the server using http://server-ip-address. I say might, because you may well see an error at startup, something like

Unable to bind to http://localhost:5000 on the IPv6 loopback interface: ‘Cannot assign requested address’.

What you need to do is go into the Properties folder and open launchSettings.json, change the line

"applicationUrl": "https://localhost:5001;http://localhost:5000",

to

"applicationUrl": "http://0.0.0.0:5001;http://0.0.0.0:5000",

Next Step

The obvious next step to our Docker build is to create a Docker image which contains our application and runs it when the container is started. We’re going to build and publish the project using dotnet publish -c Release -o publish and then include the published files in our Docker container; alternatively you might prefer to have the Dockerfile build and publish the project as part of its build process.

For now let’s just build our Blazor server application, then publish it to a folder.
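
From the project folder that amounts to running something like the following (note the Dockerfile further down assumes a project named BlazorServer rather than the earlier MyServer, so adjust names/paths to suit)

dotnet publish -c Release -o publish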

We’re going to host the application in Kestrel, so before we go any further open the appsettings.json file from the publish folder and add the following

"Kestrel": {
  "EndPoints": {
    "Http": {
      "Url": "http://0.0.0.0:5000"
    }   
  }
},

Now we’ll make changes to the Dockerfile to copy the published folder to the image and start up the Kestrel server when the container is run. Here’s the Dockerfile

FROM mcr.microsoft.com/dotnet/core/sdk:3.1

ENV APP_HOME /home
RUN mkdir -p $APP_HOME

WORKDIR $APP_HOME

COPY ./BlazorServer/BlazorServer/publish ${APP_HOME}

EXPOSE 5000

CMD [ "dotnet", "BlazorServer.dll" ]
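
Rebuild the image and run it again; this time we don’t need the interactive shell or the volume mapping, so something like the following should do

docker rmi dotnet-env --force
docker build -t dotnet-env .
docker run -d --rm -p 5000:5000 dotnet-env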

Now you should be able to access your server using http://your_server_name:5000.

Deploying my library to GitHub Packages using GitHub Actions

In my previous post I explained the steps to use Github actions to package and deploy your .nupkg to NuGet, but Github also includes support for you to deploy your package alongside your project sources.

If you take a look at the right-hand side of your project within GitHub (i.e. where you see your main README.md) you’ll notice the Packages section. If you click on Publish your first package it tells you the steps you need to take to create and deploy your package, but it wasn’t quite that simple for me, hence this post will hopefully help others out a little if they hit similar issues.

The first thing you need to do is make sure you are using a version of the dotnet CLI that supports nuget commands (specifically the add command). Here’s a set-up step with a compatible version of dotnet CLI.

- name: Setup .NET Core
  uses: actions/setup-dotnet@v1
  with:
    dotnet-version: 3.1.401

The version must be 3.1.401 or above. There may be an earlier version that also works, but I was using 3.1.101 and was getting the following failure in the build output

Specify --help for a list of available options and commands.
error: Unrecognized command or argument 'add'

So the first command listed in Publish your first package will fail if you’re not up to date with your version of dotnet. If you’re using Ubuntu instead of Windows for your builds, then you need to include --store-password-in-clear-text on your dotnet nuget add source command. Also change GH_TOKEN to ${{ secrets.GITHUB_TOKEN }}

Hence your first command will look more like this now

dotnet nuget add source https://nuget.pkg.github.com/your_user/index.json -n github -u your_user -p ${{ secrets.GITHUB_TOKEN }} --store-password-in-clear-text

Replace “your_user” with your GitHub username and if you’ve created a secret/token that you prefer to use in place of GITHUB_TOKEN, then replace that also.

The second line shown in Publish your first package is

dotnet pack --configuration Release

Thankfully this worked without an issue; however, Step 3 didn’t work for me. It requires that we set the API key, similar to the way we do this with NuGet publishing, but using GITHUB_TOKEN again, hence the third step for me to publish to GitHub is

dotnet nuget push your_path/bin/Release/*.nupkg --skip-duplicate --api-key ${{secrets.GITHUB_TOKEN}} --source "github"

Replacing “your_path” with the path of the package to be published. Use --skip-duplicate so that a triggered build doesn’t fail when the package version already exists; without this option the command tries to publish an existing/unchanged package, causing a failure. Also set the --api-key as shown.

As I am already creating a package for NuGet and want to do the same for GitHub, we can condense the commands to something like

run: |
  dotnet nuget add source https://nuget.pkg.github.com/your_user/index.json -n github -u your_user -p ${{ secrets.GITHUB_TOKEN }} --store-password-in-clear-text
  dotnet nuget push your_path/bin/Release/*.nupkg --skip-duplicate --api-key ${{secrets.GITHUB_TOKEN}} --source "github"

Don’t Forget

One more thing: in your nuspec, or via the Properties | Package tab for your project, ensure you add a Repository URL pointing at your source code. The GitHub package push will fail without this.

For example in the SDK style csproj we have

<RepositoryUrl>https://github.com/putridparrot/PutridParrot.Randomizer.git</RepositoryUrl>
<RepositoryType>git</RepositoryType>

The RepositoryType is not required for this to work, just using RepositoryUrl with https://github.com/putridparrot/PutridParrot.Randomizer worked fine.

Within a nuspec the format is as follows (if I recall)

<repository type="git" url="https://github.com/putridparrot/PutridParrot.Randomizer.git" />

Finally, you’ll want to add https://nuget.pkg.github.com/your_user/index.json to your nuget.config so that projects using that file can access GitHub’s NuGet package registry, and you’ll need a personal access token set up to access the API to pull in your package; alternatively simply download the package from GitHub and set up a local NuGet repository.
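
For example, a minimal nuget.config along these lines should work (your_user and the token value are placeholders; storing a token in clear text like this is really only suitable for a local machine)

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="github" value="https://nuget.pkg.github.com/your_user/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <github>
      <add key="Username" value="your_user" />
      <add key="ClearTextPassword" value="YOUR_PERSONAL_ACCESS_TOKEN" />
    </github>
  </packageSourceCredentials>
</configuration>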

Deploying my library to NuGet using GitHub Actions

I have a little library that I wanted to package up and make available on nuget.org. It’s nothing special, seriously it’s really not special (source available here PutridParrot.Randomizer).

Note: at the time of writing the library is very much in an alpha state, so don’t worry about the code etc., instead what we’re interested in is the .github/workflows/dotnet-core.yml.

Setting up NuGet

  • If you don’t already have a NuGet account then go to NuGet Register and sign up.
  • Next, click on your account name at the top right; from the drop down that appears select API Keys.
  • Click the Create option, give your key a name, ensure Push is checked with Push new packages and package versions selected, and in the Glob Pattern just enter an *
  • Finally click the Create button to generate an API key

The API key should be kept secret as it will allow anyone who has access to it to upload packages to your account. If for any reason you need to regenerate it, then from the Manage option on the API Keys screen, click Regenerate.

Creating our deployment action

I’m using Github actions to build and then deploy my package. As you’ve seen, we need to use an API key to upload our package and obviously we do not want this visible in the build/deployment script, so the first thing we need to do is create a Github secret.

  • Go to the Settings for your project on Github
  • Select the Secrets tab on the left of the screen
  • Click New Secret
  • Give your secret a name, e.g. NUGET_API_KEY
  • In the value of the secret, place the API key you got from NuGet

This secret is then accessible from your Github action scripts.

Note: You might be thinking, “didn’t this post say the key was to be kept secret?”. I’m assuming that as Microsoft owns both GitHub and NuGet we’re fairly safe letting them have access to the key.

Now we need to add a step to package, and then a step to deploy to NuGet, to our GitHub workflow/actions.

We’re not going to be doing anything clever with our package, so no nuspec for the project and no signing (at this time); we’ll just use the dotnet CLI pack option. Here’s the addition to the dotnet-core.yml workflow for these steps

- name: Create Package
  run: dotnet pack --configuration Release
- name: Publish to Nuget
  run: dotnet nuget push /home/runner/work/PutridParrot.Randomizer/PutridParrot.Randomizer/PutridParrot.Randomizer/bin/Release/*.nupkg --skip-duplicate --api-key ${{secrets.NUGET_API_KEY}} --source https://api.nuget.org/v3/index.json

In the above, we have a step “Create Package” to create our package using the dotnet CLI’s pack command with the Release configuration. If (as I am) you’re using an ubuntu image to run your workflow, the package will be written to a folder similar to the one listed below

/home/runner/work/PutridParrot.Randomizer/PutridParrot.Randomizer/PutridParrot.Randomizer/bin/Release/

Where you replace PutridParrot.Randomizer with your project name. Or better still, check the build logs, as the output should show where the .nupkg was written.

As you can see from the “Publish to Nuget” step, you’ll need to reuse that path; here we again use the dotnet CLI to push our .nupkg file to NuGet, using the secret we defined earlier as the API key. We’re also skipping the push if the package is a duplicate.

That’s it.

Note: One side note – if you go onto nuget.org to check your packages, they may be in an “Unlisted” state initially. This should change automatically to Published after a few minutes.

Mounting a USB HDD on Ubuntu server

I’m running up my latest Raspberry Pi with a connected USB SSD and forgot how to mount the thing, so this post is a little reminder. Of course the instructions are not exclusive to Raspberry Pi’s or Ubuntu, but hey, I just wanted a back story to this post.

So you’ve connected your USB drive and you’ve got an SSH terminal session (i.e. via PuTTY) connected to your Ubuntu server, what’s next?

Where’s my drive

The first thing you need to figure out is where your drive is, i.e. what /dev/ location it’s assigned to.

  • Try running df, which reports disk space and usage; from this you might be able to spot your drive, in my case it’s /dev/sda1 and it’s obvious from the size of the drive
  • If this is not conclusive then run sudo fdisk -l, which may help locate the disk

Mounting the drive

Assuming we’ve located the device, and for this example it is on /dev/sda1, how do we mount this drive?

  • Create a folder that will get mapped to our mounted drive, for example sudo mkdir /media/external or whatever you want to name it
  • To mount the drive we now use
    sudo mount -t ntfs-3g /dev/sda1 /media/external
    

    Obviously the drive I’m mounting here is NTFS formatted; you might be using vfat or something else instead, so check. Then we simply have the source (/dev/sda1) mapping to the destination (/media/external) that we created.

    Note: without the mount in place, ls the destination and you’ll see no files (even though they exist on your drive). After you mount the drive, ls the destination to see the files on the USB drive.

As it currently stands, when you reboot you’ll have to mount the drive again.

Auto-mounting the drive

In scenarios where we’re not removing the drive and simply want the OS to automount our drive, we need to do a couple of things.

  • Find the UUID for the drive by running sudo blkid; next to the device (i.e. /dev/sda1) should be a UUID, copy or note down the string.
  • Run sudo nano /etc/fstab
  • Add a new line to the bottom of the file along these lines
    UUID=ABCDEFGHIJK /media/external auto nosuid,nodev,nofail 0 0
    

    and save the file

  • Run sudo mount -a to check for errors

Now when you reboot the machine the USB drive should be mounted automatically.

Blazor Components

We can create Blazor components within our Blazor application by simply right-clicking on a folder or project in Visual Studio and selecting Add | Razor Component.

A component derives from Microsoft.AspNetCore.Components.ComponentBase, whether implicitly, such as within a .razor file or explicitly, such as in a C# code file.

Razor component

If we create a .razor file we are already implementing a ComponentBase, usually without an explicit inherits such as the following

@inherits ComponentBase

With or without this @inherits, we will get a generated file (written to obj\Debug\netstandard2.1\Razor) containing a class which derives from ComponentBase.

We can add code to our .razor file in a @code block, thus combining markup and code within the same file, as per this example

@namespace BlazorApp

<h3>@Text</h3>

@code {
    [Parameter]
    public string Text { get; set; }
}
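
To use the component from another component or page we just supply the parameter as an attribute, for example

<MyComponent Text="Hello Blazor" />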

Separate code and razor files

As an alternative to a single .razor file with both markup and code, we can create a .cs file for our code. For example, if our .razor file is MyComponent.razor we could create a MyComponent.cs and put our code into a C# class. The class should derive from ComponentBase and also be partial.

Hence now we have a MyComponent.razor file that looks like this

@namespace BlazorApp

<h3>@Text</h3>

and a C# file, MyComponent.cs that looks like this

using Microsoft.AspNetCore.Components;

namespace BlazorApp
{
    public partial class MyComponent : ComponentBase
    {
        [Parameter]
        public string Text { get; set; }
    }
}

Of course the use of the partial keyword is the key to this working. If we name the file MyComponent.cs all will be fine, but if we name it MyComponent.razor.cs it will appear nested under the .razor file, like a code-behind file in other C# scenarios.

Code only component

We could also write a code only component, so let’s assume we have only a MyComponent.cs file

using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Rendering;

namespace BlazorApp
{
    public class MyComponent : ComponentBase
    {
        protected override void BuildRenderTree(RenderTreeBuilder builder)
        {
            builder.OpenElement(1, "h3");
            builder.AddContent(2, Text);
            builder.CloseElement();
        }

        [Parameter]
        public string Text { get; set; }
    }
}

Obviously we have to put a little bit of effort into constructing the elements in the BuildRenderTree method, but this might suit some scenarios.

Pages and Layouts

Pages and Layouts are just components, just like those discussed above.

The only difference between a Page and a component is that the Page’s class is annotated with a RouteAttribute. Obviously in a .razor file this happens automatically when we add a page directive as below

@page "/"

As for the layout, we inherit from LayoutComponentBase, in a .razor file this looks like this

@inherits LayoutComponentBase
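
A layout then renders the page content wherever it places @Body; a minimal sketch might look like

@inherits LayoutComponentBase

<div class="main">
    @Body
</div>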

GitHub Actions – publishing changes to a branch

If you want to do something like GitHub Pages does with Jekyll, i.e. take master, generate the website and then publish the resultant files to the gh-pages branch, then you’ll need to set up a personal access token and use it in your GitHub action, for example

  • Go to Personal access tokens
  • Click on the “Generate new token” button
  • In the Note field, give it a descriptive name so you know the purpose of the token
  • If you’re wanting to interact with the repo (as we want to for this example) then check the repo checkbox to enable all repo options
  • Click the “Generate token” button
  • Copy the generated token for use in the next section

Once we have a token we’re going to use this in our repositories. So assuming you have a repo created, do the following to store the token

  • Go to your repository and click the “Settings” tab
  • Select the “Secrets” option
  • Click on the “New secret” button
  • Give the secret a name, for example PUBLISH_TOKEN
  • Paste the token from the previous section in the “Value” textbox
  • Finally click the “Add secret” button

This now stores the token along with the name/key, which can then be used in our GitHub action .yml files, for example here’s a snippet of a GitHub action to publish a website that’s stored in master to the gh-pages branch.

- name: GitHub Pages Publish
  if: ${{ github.ref == 'refs/heads/master' }}
  uses: peaceiris/actions-gh-pages@v3.6.1
  with:
    github_token: ${{ secrets.PUBLISH_TOKEN }}
    publish_branch: gh-pages
    publish_dir: ./public

In this example action, we check for changes on master, then use GitHub Actions for GitHub Pages to publish the ./public folder to the gh-pages branch. Notice we use secrets.PUBLISH_TOKEN, which means GitHub Actions will supply the token from our secrets settings using the name we gave the secret.

Obviously this example doesn’t build/generate or otherwise do anything with the code on master; it simply takes what’s pushed to master/public and publishes that to the gh-pages branch. Of course we could combine this action with previous build/generate steps as part of a build pipeline.

Modules, modules and yet more modules

Another post that’s sat in draft for a while – native JavaScript module implementations are now supported in all modern browsers, but I think it’s still interesting to look at the variations in module systems.

JavaScript doesn’t (as such) come with a built-in module system. Originally it was primarily used for script elements within an HTML document (inline scripting or a script per file), not the full blown frameworks/applications we see nowadays with the likes of React, Angular, Vue etc.

The concept of modules was introduced as a means to enable developers to separate their code into separate files, which of course aids code reuse, maintainability etc. That’s not all modules offer though. In the early days of JavaScript you could store your scripts in separate files, but these would then end up in the global namespace, which is not ideal; especially if you start sharing your scripts with others (or using others’ scripts) there’s the potential for name collisions, i.e. more than one function with the same name in the global namespace.

If you come from a language such as C#, Java or the like then you may find yourself slightly surprised that this is a big deal in JavaScript; after all, these languages (and languages older than JavaScript) seem to have solved these problems already. This is simply the way it was with JavaScript because of its original scripting background.

What becomes more confusing is that there isn’t a single solution to modules and how they should work; several groups have created their own module systems over the life of JavaScript.

Supported by TypeScript

This post is not a history of JavaScript modules, so we’re not going to cover every system ever created, but instead primarily concentrate on those supported by the TypeScript transpiler, simply because this is where my interest in the different modules came from.

TypeScript supports the following module kinds: “none”, “commonjs”, “amd”, “system”, “umd”, “es2015” and “ESNext”. These are the ones that you can set via the module entry in the tsconfig.json compilerOptions.

To save me writing my own examples of the JavaScript code, we’ll simply let the TypeScript transpiler generate the code for us and look at how it works.

Here’s a simple TypeScript (Demo.ts) file with some code (for simplicity we’re not going to worry about lint rules such as one class per file or the likes).

export class MyClass {    
}

export default class MyDefaultClass {    
}

Now my tsconfig.json looks like this

{
  "compilerOptions": {
    "target": "es6",  
    "module": "commonjs",  
    "strict": true
  },
  "include": ["Demo.ts"]
}

And I’ll simply change the “module” to each of the supported module kinds and we’ll look at the resultant JavaScript source. Alternatively we can use this index.js

var ts = require('typescript');

let code = 'export class MyClass {' + 
'}' +
'export default class MyDefaultClass {' +
'}';

let result = ts.transpile(code, { module: ts.ModuleKind.CommonJS});
console.log(result);

and change the ModuleKind to review the console output. Either way we get a chance to look at what style of output is produced.

Module CommonJS

Setting the tsconfig module to CommonJS results in the following JavaScript being generated. The Object.defineProperty call simply assigns the property __esModule to the exports. Other than this the class definitions are very similar to our original code; the main difference is in how the exports are created.

The __esModule property is set to true to let “importing” modules know that this code is a transpiled ES module. It appears this matters specifically with regards to default exports. See module.ts

The exports are made directly on the exports map and are available via their name/key. For example exports[‘default’] would supply MyDefaultClass, likewise exports[‘MyClass’] would return the MyClass type.

"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
class MyClass {
}
exports.MyClass = MyClass;
class MyDefaultClass {
}
exports.default = MyDefaultClass;

We can import the exports using require, for example

var demo = require('./DemoCommon');

var o = new demo.MyClass();

Module AMD

The Asynchronous Module Definition was designed to allow modules to be loaded asynchronously. CommonJS is a synchronous module style, hence while a module is being loaded the application is blocked/halted. The AMD module style instead expects a define function, which passes the dependencies as arguments into a callback function.

As you can see the code within the define function is basically our CommonJS file wrapped in the callback.

define(["require", "exports"], function (require, exports) {
    "use strict";
    Object.defineProperty(exports, "__esModule", { value: true });
    class MyClass {
    }
    exports.MyClass = MyClass;
    class MyDefaultClass {
    }
    exports.default = MyDefaultClass;
});

To import/require AMD modules we need to use requirejs, for example assuming we created a file from the above generated code named DemoAmd.js, then we can access the module using

var requirejs = require('requirejs');
requirejs.config({
    baseUrl: __dirname,
    nodeRequire: require
});

var demo = requirejs('./DemoAmd');

var o = new demo.MyClass();

Module System
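
Setting module to system generates code for the SystemJS module loader; rather than assigning to exports directly, the module is registered via System.register, so you’d typically load it with a loader such as SystemJS rather than plain require.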

System.register([], function (exports_1, context_1) {
    "use strict";
    var MyClass, MyDefaultClass;
    var __moduleName = context_1 && context_1.id;
    return {
        setters: [],
        execute: function () {
            MyClass = class MyClass {
            };
            exports_1("MyClass", MyClass);
            MyDefaultClass = class MyDefaultClass {
            };
            exports_1("default", MyDefaultClass);
        }
    };
});

Module UMD

UMD, or Universal Module Definition, is a module style which can be used in place of CommonJS or AMD, in that it uses an if/else to expose the module in either CommonJS or AMD style. This means you can write your modules in a way that can be imported into a system expecting either CommonJS or AMD.

(function (factory) {
    if (typeof module === "object" && typeof module.exports === "object") {
        var v = factory(require, exports);
        if (v !== undefined) module.exports = v;
    }
    else if (typeof define === "function" && define.amd) {
        define(["require", "exports"], factory);
    }
})(function (require, exports) {
    "use strict";
    Object.defineProperty(exports, "__esModule", { value: true });
    class MyClass {
    }
    exports.MyClass = MyClass;
    class MyDefaultClass {
    }
    exports.default = MyDefaultClass;
});

We can import UMD modules using either require (as shown in our CommonJS code) or requirejs (as shown in our AMD code) as UMD modules represent modules in both styles.

Module es2015 and ESNext

As you might expect, since the TypeScript we’ve written is pretty standard ES2015, the resultant code looks almost identical.

export class MyClass {
}
export default class MyDefaultClass {
}
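
These can be consumed using native import syntax; for example, assuming the generated file is named DemoEs2015.js

import MyDefaultClass, { MyClass } from './DemoEs2015.js';

const o = new MyClass();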

Module None

I’ve left this one until last as it needs further investigation; module None suggests that the generated code is not part of any module system, however the code below works quite happily with the require form of import. This is possibly a lack of understanding of its use on my part; it’s included for completeness.

"use strict";
exports.__esModule = true;
var MyClass = /** @class */ (function () {
    function MyClass() {
    }
    return MyClass;
}());
exports.MyClass = MyClass;
var MyDefaultClass = /** @class */ (function () {
    function MyDefaultClass() {
    }
    return MyDefaultClass;
}());
exports["default"] = MyDefaultClass;

Inspecting modules
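
Finally, we can poke around Node’s module objects at runtime. The following script (a rough sketch using Node’s built-in module and path modules) walks the current module, printing its filename, exports, paths and children.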

const Module = require('module');
const Path = require('path');

function printModule(m, indent = 0) {
    let indents = ' '.repeat(indent);

    console.log(`${indents}Filename: ${m.filename}`);
    console.log(`${indents}Id: ${m.id}`);
    let hasParent = m.parent !== undefined && m.parent !== null;
    console.log(`${indents}HasParent: ${hasParent}`);
    console.log(`${indents}Loaded: ${m.loaded}`);
    if (Object.keys(m.exports).length > 0) {
        console.log(`${indents}Exports`);
        for (const e of Object.keys(m.exports)) {
            console.log(`${indents}  ${e}`);
        }
    }
    if (m.paths.length > 0) {
        console.log(`${indents}Paths`);
        for (const p of m.paths) {
            console.log(`${indents}  ${p}`);
        }
    }
    if (m.children.length > 0) {
        console.log(`${indents}Children`);
        for (const child of m.children) {
            printModule(child, indent + 3);
        }
    }
}

console.log(Module._nodeModulePaths(Path.dirname('')));
printModule(module);

Echo Bot deconstructed

This post has sat in draft for a year, hopefully it’s still relevant…

I’ve returned to doing some stuff using the Microsoft Bot Framework and updated everything to SDK 4, and some things have changed. I’m not going to waste time looking at the changes so much as looking at the EchoBot template which comes with the latest SDK.

Getting started

If you haven’t already got it, you’ll need to install the Bot Builder V4 SDK Template for Visual Studio.

Note: If you instead decide to first go to Azure and create a Bot sample, the code for the EchoBot is (at the time of writing) very different to the template’s code.

Now if you create a new project in Visual Studio using this template you’ll end up with a simple Bot that you can immediately run. Next, using the new Bot Framework Emulator (V4), select Open Bot from the emulator’s Welcome screen, then locate the .bot file that came with the EchoBot and open it (or if you prefer, put the URL http://localhost:3978/api/messages into the dialog). The Bot will then start and within a short amount of time you should see the following text

conversationUpdate event detected

Note: you will see the text twice, don’t worry about that, it’s normal at this point.

Now typing something into the emulator and press enter and you’ll be presented with something like the following

Turn 1: You sent ‘hi’

As you can see, I typed “Hi” and the Bot echoes it back.

At this point we’ve a working Bot.

Navigating the code

Program.cs

The Program.cs file contains (as you’d expect) the main start-up code for Kestrel to run etc. It configures logging and then bootstraps the Startup class. By default UseApplicationInsights() is commented out, but obviously this is very useful once we deploy to Azure; for now leave it commented out. We can also enable logging to the Console etc., see Logging in ASP.NET Core for further information on the options here.

Startup.cs

Startup is used to configure services etc. It’s the place we’ll add code for dependency injection etc. The first thing we’ll see within the constructor is the following

_isProduction = env.IsProduction();
var builder = new ConfigurationBuilder()
   .SetBasePath(env.ContentRootPath)
   .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
   .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
   .AddEnvironmentVariables();

   Configuration = builder.Build();

_isProduction will be false when running on your local machine (where the environment is development) and true by default when run from Azure (where it’s production). The rest of the code creates a combined set of configuration, using the supplied appsettings.json file (part of the template’s project) along with any environment-specific appsettings.{EnvironmentName}.json file, meaning that when running in development we might have differing configuration values from the production environment.

The next interesting method is ConfigureServices where, as you’ll have guessed, we set up any dependencies prior to their use within the Bot code etc. The template code also expects the .bot file to exist. This file acts as further configuration and is not a requirement for a Bot to work; you could easily remove the .bot file, move configuration to the appsettings.json and (as stated previously) use the URL instead of the .bot file within the emulator. In fact, according to Manage bot resources, the .bot file is being deprecated in favour of appsettings.json or a .env file. However, for now it’s still part of the template, so I’m not going to cover the .bot file much further here, suffice to say its configuration is added to the dependency container in case you need access to it.

Jumping over where we located the endpoint from the .bot file, next up a data store is created. This one is a MemoryStorage object and it’s primarily for local/debug builds, in that it’ll lose its state upon a Bot restart. For production environments, where we need to persist such state, we’d use something like AzureBlobStorage, CosmosDB or any other cloud based persistent storage.

The ConversationState is then created and will be stored within the previously created IStorage, i.e. the MemoryStorage in this example. Conversation state is basically the state from the conversation taking place between user and Bot. This obviously becomes very useful in less trivial examples where dialogs are involved; in the echo Bot example it tracks the conversation counter state.
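
In code this pairing looks roughly like the following (a simplified sketch, not the exact template code)

// create the storage and wrap it in conversation state, then register it
IStorage dataStore = new MemoryStorage();
var conversationState = new ConversationState(dataStore);
services.AddSingleton(conversationState);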

Finally within the ConfigureServices method we add our Bot to the container setting up any credentials as well as setting up a catch-all error handler.

The last method within the Startup.cs sets up the ability for the web app to display default and static files as well setting up endpoints etc. for the Bot framework.

EchoBotSampleAccessors.cs

The accessors file will be named after your project and is basically a wrapper around the conversation state, see Create state property accessors. In the case of the Echo template it wraps the counter state and conversation state.
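
As a rough sketch (not the exact template code), such an accessors class looks something like this, where CounterState is the simple class the template uses to hold the turn count

public class EchoBotSampleAccessors
{
    public EchoBotSampleAccessors(ConversationState conversationState)
    {
        ConversationState = conversationState;
    }

    // key used when storing/retrieving the counter state property
    public static string CounterStateName { get; } = $"{nameof(EchoBotSampleAccessors)}.CounterState";

    // accessor used to get/set the CounterState within the conversation state
    public IStatePropertyAccessor<CounterState> CounterState { get; set; }

    public ConversationState ConversationState { get; }
}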

wwwroot/default.htm

By default, when you run the Bot an HTML page is displayed.

EchoBotBot.cs

Depending on your project name you’ll have a class which implements IBot. The key method of an IBot is OnTurnAsync; this is a little like an event loop in Windows, in that it’s called upon each turn of a conversation.
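
Stripped right down (ignoring the counter state handling the template adds), an echo-style OnTurnAsync looks something like this

public async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken = default(CancellationToken))
{
    // only echo back message activities; ignore conversationUpdate etc.
    if (turnContext.Activity.Type == ActivityTypes.Message)
    {
        await turnContext.SendActivityAsync($"Turn: You sent '{turnContext.Activity.Text}'", cancellationToken: cancellationToken);
    }
}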

Creating a yeoman generator

In the previous post we looked at the basics of getting started with yeoman and the tools etc. around it. Let’s now write some real code. This is going to be a generator for creating a node server using my preferred stack of technologies, allowing the person running it to supply data and/or options so the generator can produce code specific to the user’s needs.

Input

Our node server will offer the option of handling REST, websocket or GraphQL endpoints. So we’re going to need some input from the user to choose the options they want.

First off, DO NOT USE the standard console.log etc. methods for output. Yeoman supplies the log function for this purpose.

Here’s an example of our generator with some basic interaction

var Generator = require("yeoman-generator");
module.exports = class extends Generator {
    async prompting() {
        const input = await this.prompt([
            {
                type: "input",
                name: "name",
                message: "Enter project name",
                default: this.appname
            },
            {
                type: "list",
                name: "endpoint",
                message: "Endpoint type?",
                    choices: ["REST", "REST/websocket", "GraphQL"]
            }
        ]);


        this.log("Project name: ", input.name);
        this.log("Endpoint: ", input.endpoint);
    }
};

Now if we run our generator we’ll be prompted (from the CLI) for a project name and for the selected endpoint type. The results of these prompts will then be written to the output stream.

As you can see from this simple example we can now start to build up a list of options for our generator to use when we generate our code.

Command line arguments

In some cases we might want to allow the user to supply arguments from the command line, i.e. not be prompted for them. To achieve this we add a constructor, like this

constructor(args, opts) {
   super(args, opts);

   this.argument("name", { type: String, required: false });

   this.log(this.options.name);
}

Here we’ve declared an argument name which is not required on the command line; this also allows us to run yo server --help to see a list of options available for our generator.

The only problem with the above code is that if the user supplies this argument, they are still prompted for it via the prompting method. To solve this we can add the following

yarn add yeoman-option-or-prompt

Now change our code to require yeoman-option-or-prompt, i.e.

var OptionOrPrompt = require('yeoman-option-or-prompt');

Next change the constructor slightly, to this

constructor(args, opts) {
   super(args, opts);

   this.argument("name", { type: String, required: false });

   this.optionOrPrompt = OptionOrPrompt;
}

and finally let’s change our prompting method to

async prompting() {

   const input = await this.optionOrPrompt([           
      {
         type: "input",
         name: "name",
         message: "Enter project name",
         default: this.appname
      },
      {
         type: "list",
         name: "endpoint",
         message: "Endpoint type?",
            choices: ["REST", "REST/websocket", "GraphQL"]
      }
   ]);

   this.log("Project name: ", input.name);
   this.log("Endpoint: ", input.endpoint);
}

Now when we run yo server without an argument we still get the project name prompt, but when we supply the argument, i.e. yo server MyProject then the project name prompt no longer appears.

Templates

With projects such as the one we’re developing here, it would be a pain if all output had to be written via code. Luckily yeoman includes a template capability from https://ejs.co/.

So in this example add a templates folder to generators/app and then within it add package.json, here’s my file

{
    "name": "<%= name %>",
    "version": "1.0.0",
    "description": "",
    "module": "es6",
    "dependencies": {
    },
    "devDependencies": {
    }
  }

Notice the use of <%= %> to define our template variables. The variable name now needs to be supplied via our generator. We need to make a couple of changes from our original source: the const input needs to change to this.input so that the input variable is accessible in another method, the writing method, which looks like this

writing() {
   this.fs.copyTpl(
      this.templatePath('package.json'),
      this.destinationPath('public/package.json'),
         { name: this.input.name } 
   );
}

here’s the changed prompting method as well

async prompting() {

   this.input = await this.optionOrPrompt([           
      {
         type: "input",
         name: "name",
         message: "Enter project name",
         default: this.options.name
      },
      {
         type: "list",
         name: "endpoint",
         message: "Endpoint type?",
            choices: ["REST", "REST/websocket", "GraphQL"]
      }
   ]);
}

Now we can take this further

{
    "name": "<%= name %>",
    "version": "1.0.0",
    "description": "",
    "module": "es6",
    "dependencies": { <% for (let i = 0; i < dependencies.length; i++) {%>
      "<%= dependencies[i].name%>": "<%= dependencies[i].version%>"<% if(i < dependencies.length - 1) {%>,<%}-%>
      <%}%>
    },
    "devDependencies": {
    }
  }

and here’s the changes to the writing function

writing() {

   const dependencies = [
      { name: "express", version: "^4.17.1" },
      { name: "body-parser", version: "^1.19.0" }
   ]

   this.fs.copyTpl(
      this.templatePath('package.json'),
      this.destinationPath('public/package.json'),
      { 
         name: this.input.name, 
         dependencies: dependencies 
      } 
   );
}

The above is quite convoluted; luckily yeoman includes functionality just for such things, using JSON objects

const pkgJson = {
   dependencies: {
      "express": "^4.17.1",
      "body-parser": "^1.19.0"
   }
}

this.fs.extendJSON(
   this.destinationPath('public/package.json'), pkgJson)

Storybook render decorator

Storybook has a bunch of really useful decorators.

This one is useful for seeing when changes to data, state or properties cause re-rendering of our React UI. In some cases a change should not cause a subcomponent to re-render; with this decorator we can see what caused a render to occur (or at least be pointed towards possible reasons).

We need to install the add-on, like this

yarn add -D storybook-addon-react-renders

and we just add code like this

import React from "react";
import { storiesOf } from "@storybook/react";
import { withRenders } from "storybook-addon-react-renders";

// SomeComponent is a placeholder for whichever component you're testing
storiesOf("Some Component Test", module)
  .addDecorator(withRenders)
  .add("default", () => <SomeComponent />);