Monthly Archives: July 2020

Blazor Components

We can create Blazor components within our Blazor application by simply right-clicking on a folder or project in Visual Studio and selecting Add | Razor Component.

A component derives from Microsoft.AspNetCore.Components.ComponentBase, whether implicitly, such as within a .razor file, or explicitly, such as in a C# code file.

Razor component

If we create a .razor file we are already implementing a ComponentBase, but usually without an explicit inherits directive such as the following

@inherits ComponentBase

With or without this @inherits directive, we will get a generated file (written to obj\Debug\netstandard2.1\Razor) that contains a class deriving from ComponentBase.
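
Roughly speaking, for a component file named MyComponent.razor, that generated class looks something like the following (a hand-written approximation of the idea, not the exact compiler output):

using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Rendering;

// an approximation only - the real generated code includes sequence
// numbers, pragmas and more
public partial class MyComponent : ComponentBase
{
    protected override void BuildRenderTree(RenderTreeBuilder builder)
    {
        // the markup from the .razor file is compiled into builder calls here
    }
}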

We can add code to our .razor file in a @code block, thus combining markup and code within the same file, as per this example

@namespace BlazorApp

<h3>@Text</h3>

@code {
    [Parameter]
    public string Text { get; set; }
}
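
Assuming the file is named MyComponent.razor, the component can then be consumed from another component or page (given a using/import for the BlazorApp namespace), with the parameter supplied as an attribute:

<MyComponent Text="Hello Blazor" />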

Separate code and razor files

As an alternative to a single .razor file with both markup and code, we can create a .cs file for our code. For example, if our .razor file is MyComponent.razor we could create a MyComponent.cs and put our code into a C# class. The class should derive from ComponentBase and also be partial.

Hence now we have a MyComponent.razor file that looks like this

@namespace BlazorApp

<h3>@Text</h3>

and a C# file, MyComponent.cs that looks like this

using Microsoft.AspNetCore.Components;

namespace BlazorApp
{
    public partial class MyComponent : ComponentBase
    {
        [Parameter]
        public string Text { get; set; }
    }
}

Of course the use of the partial keyword is the key to this working. If we name the file MyComponent.cs all will be fine, but if we name the file MyComponent.razor.cs it will appear nested beneath the .razor file, like a code-behind file in other C# scenarios.

Code only component

We could also write a code only component, so let’s assume we have only a MyComponent.cs file

using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Rendering;

namespace BlazorApp
{
    public class MyComponent : ComponentBase
    {
        protected override void BuildRenderTree(RenderTreeBuilder builder)
        {
            builder.OpenElement(1, "h3");
            builder.AddContent(2, Text);
            builder.CloseElement();
        }

        [Parameter]
        public string Text { get; set; }
    }
}

Obviously we have to put a little bit of effort into constructing the elements in the BuildRenderTree method, but this might suit some scenarios.

Pages and Layouts

Pages and layouts are simply components, just like those discussed above.

The only difference between a page and a component is that a page's class is annotated with a RouteAttribute. Obviously in a .razor file this happens automatically when we add a page directive as below

@page "/"
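
For a code-only page we can apply the RouteAttribute ourselves; here's a minimal sketch (the MyPage name is just illustrative):

using Microsoft.AspNetCore.Components;

namespace BlazorApp
{
    // the equivalent of @page "/mypage" in a .razor file
    [Route("/mypage")]
    public class MyPage : ComponentBase
    {
    }
}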

As for the layout, we inherit from LayoutComponentBase, which in a .razor file looks like this

@inherits LayoutComponentBase
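
A minimal layout then renders the current page's content wherever it places the Body property, for example

@inherits LayoutComponentBase

<div class="main">
    @Body
</div>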

GitHub Actions – publishing changes to a branch

If you want to do something like GitHub Pages does with Jekyll, i.e. take master, generate the website and then publish the resultant files to the gh-pages branch, then you'll need to set up a personal access token and use it in your GitHub action. For example

  • Go to Personal access tokens
  • Click on the “Generate new token” button
  • In the Note field, give it a descriptive name so you know the purpose of the token
  • If you’re wanting to interact with the repo (as we want to for this example) then check the repo checkbox to enable all repo options
  • Click the “Generate token” button
  • Copy the generated token for use in the next section

Once we have a token we’re going to use this in our repositories. So assuming you have a repo created, do the following to store the token

  • Go to your repository and click the “Settings” tab
  • Select the “Secrets” option
  • Click on the “New secret” button
  • Give the secret a name, for example PUBLISH_TOKEN
  • Paste the token from the previous section in the “Value” textbox
  • Finally click the “Add secret” button

This now stores the token along with the name/key, which can then be used in our GitHub action .yml files, for example here’s a snippet of a GitHub action to publish a website that’s stored in master to the gh-pages branch.

- name: GitHub Pages Publish
  if: ${{ github.ref == 'refs/heads/master' }}
  uses: peaceiris/actions-gh-pages@v3.6.1
  with:
    github_token: ${{ secrets.PUBLISH_TOKEN }}
    publish_branch: gh-pages
    publish_dir: ./public
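
For context, here's a sketch of a complete workflow file around that step; the workflow name, trigger and checkout step here are assumptions rather than part of the original snippet.

# .github/workflows/publish.yml
name: publish

on:
  push:
    branches: [master]

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # any build/generate steps would go here

      - name: GitHub Pages Publish
        if: ${{ github.ref == 'refs/heads/master' }}
        uses: peaceiris/actions-gh-pages@v3.6.1
        with:
          github_token: ${{ secrets.PUBLISH_TOKEN }}
          publish_branch: gh-pages
          publish_dir: ./public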

In this example action, the step only runs for pushes to master; we then use GitHub Actions for GitHub Pages to publish the ./public folder on master to the gh-pages branch. Notice we use secrets.PUBLISH_TOKEN, which means GitHub Actions will supply the token from our secrets settings using the name we gave the secret.

Obviously this example doesn't build/generate or otherwise do anything with the code on master, it simply takes what's pushed to master/public and publishes that to the gh-pages branch. Of course we can combine this action with preceding build/generate steps as part of a build pipeline.

Modules, modules and yet more modules

Another post that’s sat in draft for a while – native JavaScript module implementations are now supported in all modern browsers, but I think it’s still interesting to look at the variations in module systems.

JavaScript doesn't (as such) come with a built-in module system. Originally it was primarily used for script elements within an HTML document (inline scripting or even a script per file), not the full-blown frameworks/applications we see nowadays with the likes of React, Angular, Vue etc.

The concept of modules was introduced as a means to enable developers to separate their code into separate files, which of course aids reuse of code, maintainability etc. That's not all that modules offer, though. In the early days of JavaScript you could store your scripts in separate files, but these would then end up in the global namespace, which is not ideal. Especially if you start sharing your scripts with others (or using others' scripts), there's the potential for name collisions, i.e. more than one function with the same name in the global namespace.
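
As a contrived illustration (the file and function names here are hypothetical), two scripts loaded into the same page will quietly fight over a shared global name:

// utils-a.js
function formatValue(v) {
    return v.toFixed(2);
}

// utils-b.js - loaded afterwards, silently replaces the global formatValue
function formatValue(v) {
    return String(v);
}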

If you come from a language such as C#, Java or the like then you may find yourself slightly surprised that this is a big deal in JavaScript; after all, these languages (and languages older than JavaScript) seem to have solved this problem already. This is simply the way it was with JavaScript because of its original scripting background.

What becomes more confusing is that there isn't a single solution to modules and how they should work; several groups have created their own module systems over the life of JavaScript.

Supported by TypeScript

This post is not a history of JavaScript modules, so we're not going to cover every system ever created, but instead primarily concentrate on those supported by the TypeScript transpiler, simply because this is where my interest in the different modules came from.

TypeScript supports the following module types: “none”, “commonjs”, “amd”, “system”, “umd”, “es2015” and “ESNext”. These are the ones you can set via the module property within the tsconfig.json compilerOptions.

To save writing my own examples of the JavaScript code, we'll simply let the TypeScript transpiler generate the code for us and look at how it looks/works.

Here’s a simple TypeScript (Demo.ts) file with some code (for simplicity we’re not going to worry about lint rules such as one class per file or the likes).

export class MyClass {    
}

export default class MyDefaultClass {    
}

Now my tsconfig.json looks like this

{
  "compilerOptions": {
    "target": "es6",  
    "module": "commonjs",  
    "strict": true
  },
  "include": ["Demo.ts"]
}

And I'll simply change the “module” to each of the supported module kinds and we'll look at the resultant JavaScript source. Alternatively we can use this index.js

var ts = require('typescript');

let code = 'export class MyClass {' + 
'}' +
'export default class MyDefaultClass {' +
'}';

let result = ts.transpile(code, { module: ts.ModuleKind.CommonJS});
console.log(result);

and change the ModuleKind to review the console output. Either way we get a chance to look at the style of output produced.

Module CommonJS

Setting tsconfig module to CommonJS results in the following JavaScript being generated. The Object.defineProperty call simply assigns the __esModule property to the exports; other than this, the class definitions are very similar to our original code. The main difference is in how the exports are created.

The __esModule property is set to true to let “importing” modules know that this code is a transpiled ES module. It appears this matters specifically with regards to default exports. See module.ts

The exports are made directly on the exports map and are available via their name/key. For example exports['default'] would supply MyDefaultClass, likewise exports['MyClass'] would return the MyClass type.

"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
class MyClass {
}
exports.MyClass = MyClass;
class MyDefaultClass {
}
exports.default = MyDefaultClass;

We can import the exports using require, for example

var demo = require('./DemoCommon');

var o = new demo.MyClass();
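
As the default export is simply stored under the 'default' key, we can access it the same way (a small sketch extending the example above):

var demo = require('./DemoCommon');

// MyDefaultClass was exported as the default export
var d = new demo.default();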

Module AMD

The Asynchronous Module Definition was designed to allow modules to be loaded asynchronously. CommonJS is a synchronous module style, and hence while a module is being loaded the application is blocked/halted. The AMD module style instead expects a define function, which passes the dependencies as arguments into a callback function.

As you can see, the code within the define callback is basically our CommonJS output wrapped in a function.

define(["require", "exports"], function (require, exports) {
    "use strict";
    Object.defineProperty(exports, "__esModule", { value: true });
    class MyClass {
    }
    exports.MyClass = MyClass;
    class MyDefaultClass {
    }
    exports.default = MyDefaultClass;
});

To import/require AMD modules we need to use requirejs; for example, assuming we saved the above generated code in a file named DemoAmd.js, we can access the module using

var requirejs = require('requirejs');
requirejs.config({
    baseUrl: __dirname,
    nodeRequire: require
});

var demo = requirejs('./DemoAmd');

var o = new demo.MyClass();

Module System

The System module format targets the SystemJS loader: each module is registered via System.register, which receives the module's dependencies plus a callback returning setters (invoked as imported dependencies are loaded) and an execute function containing the module's body.

System.register([], function (exports_1, context_1) {
    "use strict";
    var MyClass, MyDefaultClass;
    var __moduleName = context_1 && context_1.id;
    return {
        setters: [],
        execute: function () {
            MyClass = class MyClass {
            };
            exports_1("MyClass", MyClass);
            MyDefaultClass = class MyDefaultClass {
            };
            exports_1("default", MyDefaultClass);
        }
    };
});
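
A sketch of consuming the registered module with SystemJS might look like the following (assuming the generated code is saved as DemoSystem.js and the loader is available as the global System):

System.import('./DemoSystem.js').then(function (demo) {
    var o = new demo.MyClass();
    var d = new demo.default();
});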

Module UMD

UMD, or Universal Module Definition, is a module style which can be used in place of CommonJS or AMD, in that it uses runtime if/else checks to expose the module in either CommonJS or AMD style. This means you can write your modules in a way that can be imported into a system expecting either CommonJS or AMD.

(function (factory) {
    if (typeof module === "object" && typeof module.exports === "object") {
        var v = factory(require, exports);
        if (v !== undefined) module.exports = v;
    }
    else if (typeof define === "function" && define.amd) {
        define(["require", "exports"], factory);
    }
})(function (require, exports) {
    "use strict";
    Object.defineProperty(exports, "__esModule", { value: true });
    class MyClass {
    }
    exports.MyClass = MyClass;
    class MyDefaultClass {
    }
    exports.default = MyDefaultClass;
});

We can import UMD modules using either require (as shown in our CommonJS code) or requirejs (as shown in our AMD code), as UMD modules support both styles.

Module es2015 and ESNext

As you might expect, since the TypeScript we've written is pretty much standard ES2015, the resultant code looks identical.

export class MyClass {
}
export default class MyDefaultClass {
}

Module None

I've left this one until last as it needs further investigation; module None suggests that the generated code is not part of any module system, however the code below works quite happily with the require form of import. This is possibly a lack of understanding of its use on my part; it's included for completeness.

"use strict";
exports.__esModule = true;
var MyClass = /** @class */ (function () {
    function MyClass() {
    }
    return MyClass;
}());
exports.MyClass = MyClass;
var MyDefaultClass = /** @class */ (function () {
    function MyDefaultClass() {
    }
    return MyDefaultClass;
}());
exports["default"] = MyDefaultClass;

Inspecting modules

Finally, Node.js exposes details of each loaded module (its filename, id, exports, search paths and child modules), so we can walk the module tree and print it:

const Module = require('module');
const Path = require('path');

function printModule(m, indent = 0) {
    let indents = ' '.repeat(indent);

    console.log(`${indents}Filename: ${m.filename}`);
    console.log(`${indents}Id: ${m.id}`);
    let hasParent = m.parent !== undefined && m.parent !== null;
    console.log(`${indents}HasParent: ${hasParent}`);
    console.log(`${indents}Loaded: ${m.loaded}`);
    if (Object.keys(m.exports).length > 0) {
        console.log(`${indents}Exports`);
        for (const e of Object.keys(m.exports)) {
            console.log(`${indents}  ${e}`);
        }
    }
    if (m.paths.length > 0) {
        console.log(`${indents}Paths`);
        for (const p of m.paths) {
            console.log(`${indents}  ${p}`);
        }
    }
    if (m.children.length > 0) {
        console.log(`${indents}Children`);
        for (const child of m.children) {
            printModule(child, indent + 3);
        }
    }
}

console.log(Module._nodeModulePaths(Path.dirname('')));
printModule(module);

Echo Bot deconstructed

This post has sat in draft for a year; hopefully it's still relevant…

I've returned to doing some stuff using the Microsoft Bot Framework and updated everything to SDK 4, and some things have changed. I'm not going to spend time looking at the changes so much as looking at the EchoBot template which comes with the latest SDK.

Getting started

If you haven’t already got it, you’ll need to install the Bot Builder V4 SDK Template for Visual Studio.

Note: If you instead decide to first go to Azure and create a Bot sample, the code for the EchoBot is (at the time of writing) very different to the template’s code.

Now if you create a new project in Visual Studio using this template you'll end up with a simple Bot that you can immediately run. Next, using the new Bot Framework Emulator (V4), select Open Bot from the emulator's Welcome screen, then locate the .bot file that came with the EchoBot and open it (or if you prefer, put the URL http://localhost:3978/api/messages into the dialog). The Bot will be started and within a short amount of time you should see the following text

conversationUpdate event detected

Note: you will see the text twice, don’t worry about that, it’s normal at this point.

Now type something into the emulator, press enter, and you'll be presented with something like the following

Turn 1: You sent ‘hi’

As you can see, I typed “hi” and the Bot echoed it back.

At this point we’ve a working Bot.

Navigating the code

Program.cs

The Program.cs file contains, as you'd expect, the main start-up code for Kestrel to run etc. It configures logging and then bootstraps the Startup class. By default UseApplicationInsights() is commented out, but obviously this is very useful once we deploy to Azure; for now leave it commented out. We can also enable logging to the console etc.; see Logging in ASP.NET Core for further information on the options here.

Startup.cs

Startup is used to configure services etc. It’s the place we’ll add code for dependency injection etc. The first thing we’ll see within the constructor is the following

_isProduction = env.IsProduction();
var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
    .AddEnvironmentVariables();

Configuration = builder.Build();

_isProduction will be false when running on your local machine (the development environment) and, by default, true when run from Azure (production). The rest of the code creates a combined set of configuration using the supplied appsettings.json file (part of the template's project) along with any appsettings.json file for the specified environment, meaning that when running in development we might have configuration values differing from the production environment.

The next interesting method is ConfigureServices where, as you'll have guessed, we set up any dependencies prior to their use within the Bot code etc. The template code also expects the .bot file to exist. This file acts as further configuration and is not a requirement for a Bot to work; you could easily remove the .bot file, move the configuration to appsettings.json and (as stated previously) use the URL within the emulator instead of the .bot file. In fact, according to Manage bot resources, the .bot file is being deprecated in favour of appsettings.json or a .env file. However, for now it's still part of the template, so I'm not going to cover the .bot file much further here; suffice to say its configuration is added to the dependency container in case you need access to it.

Jumping over where we located the endpoint from the .bot file, next up a data store is created. This one is a MemoryStorage object and it's primarily for local/debug builds, in that it'll lose its state upon a Bot restart. For production environments where we need to persist such state we'd use something like AzureBlobStorage, Cosmos DB or any other cloud based persistent storage.

The ConversationState is then created and will be stored within the previously created IStorage, i.e. the MemoryStorage in this example. Conversation state is basically the state of the conversation taking place between user and Bot. This obviously becomes very useful in less trivial examples where dialogs are involved; in the echo Bot example it tracks the conversation counter state.

Finally, within the ConfigureServices method we add our Bot to the container, setting up any credentials as well as a catch-all error handler.

The last method within the Startup.cs sets up the ability for the web app to display default and static files as well setting up endpoints etc. for the Bot framework.

EchoBotSampleAccessors.cs

The accessors file will be named after your project name and is basically a wrapper around the conversation state, see Create state property accessors. In the case of the Echo template it wraps the counter state and conversation state.

wwwroot/default.htm

By default, when you run the Bot an HTML page is displayed.

EchoBotBot.cs

Depending on your project name you'll have a class which implements IBot. The key method of an IBot is OnTurnAsync; this is similar to an event loop in Windows, in that it's basically called upon each turn of a conversation.
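
To give a feel for the shape of that method, here's a heavily simplified sketch of an echoing OnTurnAsync (this is not the template's exact code, which also tracks a turn counter via the accessors):

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class EchoBot : IBot
{
    public async Task OnTurnAsync(ITurnContext turnContext,
        CancellationToken cancellationToken = default(CancellationToken))
    {
        // only echo message activities, ignoring conversationUpdate etc.
        if (turnContext.Activity.Type == ActivityTypes.Message)
        {
            await turnContext.SendActivityAsync(
                $"You sent '{turnContext.Activity.Text}'",
                cancellationToken: cancellationToken);
        }
    }
}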

Creating a yeoman generator

In the previous post we looked at the basics of getting started with yeoman and the tools etc. around it. Let's now write some real code. This is going to be a generator for creating a node server using my preferred stack of technologies, and it will allow the person running it to supply data and/or options so the generator can produce code specific to the user's needs.

Input

Our node server will offer the option of handling REST, websocket or GraphQL endpoints. So we’re going to need some input from the user to choose the options they want.

First off, DO NOT USE the standard console.log etc. methods for output. Yeoman supplies the log function for this purpose.

Here’s an example of our generator with some basic interaction

var Generator = require("yeoman-generator");
module.exports = class extends Generator {
    async prompting() {
        const input = await this.prompt([
            {
                type: "input",
                name: "name",
                message: "Enter project name",
                default: this.appname
            },
            {
                type: "list",
                name: "endpoint",
                message: "Endpoint type?",
                    choices: ["REST", "REST/websocket", "GraphQL"]
            }
        ]);


        this.log("Project name: ", input.name);
        this.log("Endpoint: ", input.endpoint);
    }
};

Now if we run our generator we'll be prompted (from the CLI) for a project name and for the selected endpoint type. The results of these prompts are then written to the output stream.

As you can see from this simple example we can now start to build up a list of options for our generator to use when we generate our code.

Command line arguments

In some cases we might want to allow the user to supply arguments from the command line, i.e. not be prompted for them. To achieve this we add a constructor, like this

constructor(args, opts) {
   super(args, opts);

   this.argument("name", { type: String, required: false });

   this.log(this.options.name);
}

Here we've declared an argument name which is not required on the command line; this also allows us to run yo server --help to see a list of options available for our generator.

The only problem with the above code is that if the user supplies this argument, they are still prompted for it via the prompting method. To solve this we can add the following

yarn add yeoman-option-or-prompt

Now change our code to require yeoman-option-or-prompt, i.e.

var OptionOrPrompt = require('yeoman-option-or-prompt');

Next change the constructor slightly, to this

constructor(args, opts) {
   super(args, opts);

   this.argument("name", { type: String, required: false });

   this.optionOrPrompt = OptionOrPrompt;
}

and finally let’s change our prompting method to

async prompting() {

   const input = await this.optionOrPrompt([           
      {
         type: "input",
         name: "name",
         message: "Enter project name",
         default: this.appname
      },
      {
         type: "list",
         name: "endpoint",
         message: "Endpoint type?",
            choices: ["REST", "REST/websocket", "GraphQL"]
      }
   ]);

   this.log("Project name: ", input.name);
   this.log("Endpoint: ", input.endpoint);
}

Now when we run yo server without an argument we still get the project name prompt, but when we supply the argument, i.e. yo server MyProject, the project name prompt no longer appears.
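
As an aside, yeoman also supports named flags (rather than positional arguments) via this.option; here's a small sketch, where skip-install is just an illustrative name:

constructor(args, opts) {
   super(args, opts);

   // supplied on the command line as: yo server --skip-install
   this.option("skip-install", { type: Boolean, default: false });

   this.log(this.options["skip-install"]);
}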

Templates

With projects such as the one we're developing here, it would be a pain if all output had to be written via code. Luckily yeoman includes a template capability based on EJS (https://ejs.co/).

So in this example, add a templates folder to generators/app and then within it add package.json; here's my file

{
    "name": "<%= name %>",
    "version": "1.0.0",
    "description": "",
    "module": "es6",
    "dependencies": {
    },
    "devDependencies": {
    }
  }

Notice the use of <%= %> to define our template variables. The variable name now needs to be supplied via our generator. We need to make a couple of changes from our original source: the const input needs to change to this.input, so that the input variable is accessible from another method, the writing method, which looks like this

writing() {
   this.fs.copyTpl(
      this.templatePath('package.json'),
      this.destinationPath('public/package.json'),
         { name: this.input.name } 
   );
}

here’s the changed prompting method as well

async prompting() {

   this.input = await this.optionOrPrompt([           
      {
         type: "input",
         name: "name",
         message: "Enter project name",
         default: this.options.name
      },
      {
         type: "list",
         name: "endpoint",
         message: "Endpoint type?",
            choices: ["REST", "REST/websocket", "GraphQL"]
      }
   ]);
}

Now we can take this further, looping over a list of dependencies within the template

{
    "name": "<%= name %>",
    "version": "1.0.0",
    "description": "",
    "module": "es6",
    "dependencies": { <% for (let i = 0; i < dependencies.length; i++) {%>
      "<%= dependencies[i].name%>": "<%= dependencies[i].version%>"<% if(i < dependencies.length - 1) {%>,<%}-%>
      <%}%>
    },
    "devDependencies": {
    }
  }

and here’s the changes to the writing function

writing() {

   const dependencies = [
      { name: "express", version: "^4.17.1" },
      { name: "body-parser", version: "^1.19.0" }
   ]

   this.fs.copyTpl(
      this.templatePath('package.json'),
      this.destinationPath('public/package.json'),
      { 
         name: this.input.name, 
         dependencies: dependencies 
      } 
   );
}

The above is quite convoluted; luckily yeoman includes functionality just for such things, using JSON objects

const pkgJson = {
   dependencies: {
      "express": "^4.17.1",
      "body-parser": "^1.19.0"
   }
}

this.fs.extendJSON(
   this.destinationPath('public/package.json'), pkgJson)

Storybook render decorator

Storybook has a bunch of really useful decorators.

This one is useful for seeing when changes, whether to data, state or properties, cause re-rendering of our React UI. In some cases a change should not cause a subcomponent to render; with this decorator we can see what caused the render to occur (or at least be pointed towards possible reasons).

We need to install the add-on, like this

yarn add -D storybook-addon-react-renders

and we just add code like this

import React from "react";
import { storiesOf } from "@storybook/react";
import { withRenders } from "storybook-addon-react-renders";
import SomeComponent from "./SomeComponent"; // hypothetical component under test

storiesOf("Some Component Test", module)
  .addDecorator(withRenders)
  .add("default", () => <SomeComponent />);

Creating scaffolding with yeoman

Yeoman is basically a tool for generating scaffolding, i.e. predefined projects (see generators for a list of some existing generators).

You could generate things other than code/projects equally well using yeoman.

There's a fair few existing generators, but you can also define your own. Say, for example, you have a standard set of tools used to create your Node based servers, i.e. you want TypeScript, express, eslint, jest etc.; we could use yeoman to set everything up.

Of course you could create a shell script for this, or a custom CLI, but yeoman gives you the ability to do all this in JavaScript, with the power that comes from that language and its eco-system.

Within a yeoman script we can also interact with the user via the console, i.e. ask for input, and create templates for building code, configuration etc.

Installing the tooling

Before we start using yeoman we need to install it, so run the following npm command (or the yarn equivalent)

npm install -g yo 

To check everything worked simply run

yo --version

At the time of writing, my version is 3.1.1.

Before we get onto the real topic of this post, let's just check out some yeoman commands

  • Listing installed generators
    yo --generators
    
  • Diagnose yeoman issues
    yo doctor
    

Creating our own generators

Okay, so this is what we're really interested in. I have a bunch of technologies I often use (my usual stack of tech./packages). For example, if I'm creating a Node based server, I'll tend to use TypeScript, express, jest and so on. Whilst we can, of course, create things like a git repo with everything set up and just clone it, or write shell scripts to run our commands, as mentioned, with yeoman we can also template our code as well as interact with the user via the CLI to conditionally generate parts of our application.

There appears to be a generator for producing generators, but it failed to work for me; for completeness here it is

npm install -g yo generator-generator

Now, let’s write our first generator…

Run the following, to create our package.json file

yarn init -y

The first thing to note is that the generator name should be prefixed with generator-. Therefore we need to change our “name” within package.json, for example

"name": "generator-server"

The layout of our files is expected to be either of the following (off of our root)

packages.json
generators/app/index.js
generators/router/index.js

OR

packages.json
app/index.js
router/index.js

Whichever layout we choose should be reflected in package.json like this

"files": [
    "generators"
  ],

OR

"files": [
    "app",
    "router"
  ],

You might, at this point, wonder what the point of the router is. Whilst this is within the yeoman getting started guide, it appears that ultimately any folder added alongside the app folder will appear as a “subcommand” (if you like) of your generator. In this example, assuming the app name is generator-server (see below), we will also see that router can be run using the yo server:router syntax. Hence you can create multiple commands under your main yeoman application.

We’ll also need to add the yeoman-generator package before we go too much further, so run

yarn add yeoman-generator

So here’s a minimal example of what your package.json might look like

{
  "name": "generator-server",
  "version": "0.1.0",
  "description": "",
  "files": [
    "generators"
  ],
  "keywords": ["yeoman-generator"],
  "dependencies": {
    "yeoman-generator": "^1.0.0"
  }
}

Writing our generator code

In the previous section we got everything in place to allow our generator to be recognised by yeoman, so let’s now write some code.

Here’s an example of a starting point from the yeoman website.

In generators/app/index.js we have a simple example

var Generator = require("yeoman-generator");
module.exports = class extends Generator {
   method1() {
      this.log('method 1 just ran');
   }
   method2() {
      this.log('method 2 just ran');
   }
};

Sadly this is not using ES module (import/export) syntax; maybe I'll look into that in a future post, but for now it's not too big of a deal. There is a @types/yeoman-generator package if you want to work with TypeScript, but I'll again leave that for a possible future post.

When we get to run this generator, you'll find that both methods are run, hence we get the following output

method 1 just ran
method 2 just ran

All the methods we add to the Generator class are public and so are run by yeoman. We can make them private by prefixing the method name with an underscore (a fairly standard JavaScript convention to suggest a field or method is private or should be ignored).

The order the methods appear in is the order they're executed in, hence switching these two methods around will result in method2 running first, followed by method1.
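
To illustrate the underscore convention (a small sketch; _helper is just an illustrative name):

var Generator = require("yeoman-generator");
module.exports = class extends Generator {
   method1() {
      this.log("method 1 just ran");
      this._helper();
   }
   // the underscore prefix means yeoman will not run this automatically
   _helper() {
      this.log("helper ran only because method1 called it");
   }
};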

We’re not going to write any further code at this point, I’ll leave coding the generator for another post.

Testing our generator

At this point we don’t want to deploy our generator remotely, but want to simply test it locally. To do this we run the following command from the root folder of our generator

yarn link

This will create a symbolic link for npm/yarn and now we can run

yo --generators

which should list our new generator, named server.

Now we have our generator available to yeoman, we simply type

yo server

Obviously server is replaced by the name of your generator.

Haskell basics – Functions

Note: I've been writing these Haskell blog posts whilst learning Haskell, so do not take them as expert guidance. I will amend the post(s) as I learn more.

Functions are created using the format

function-name params = function-definition

Note: function names must start with a lowercase letter, camelCase by convention.

So for example, let's assume we have a Calculator module; its functions add, subtract, multiply and divide might look like this

module Modules.Calculator where

add a b = a + b
subtract a b = a - b
multiply a b = a * b
divide a b = a / b

Functions can be created without type information (as shown above), however it's considered good practice to specify type annotations for them. So, for example, let's annotate the add function to say it takes Int inputs and returns an Int result

add :: Int -> Int -> Int
add a b = a + b

Now if we try to use floating point numbers with the add function, we'll get a compile time error. Obviously it's more likely we'd want to handle floating point numbers with this function, so let's change it to

add :: Double -> Double -> Double
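
With the Double annotation in place the function happily accepts fractional values, for example in GHCi (assuming the module file loads as above):

ghci> add 1.5 2.25
3.75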

Migrating a folder from one git repo to another

I had a situation where I had a git repo consisting of a Java project and a C# project (a small monorepo). We decided that permissions for each project needed to differ (i.e. the admins of those projects) and, maybe more importantly, changes to one were causing “Pending” changes to the other within CI/CD, in this case TeamCity.

So we needed to split the project. Of course it's easy to create a new project and copy the code, but we wanted to keep the commit history etc.

What I’m going to list below are the steps that worked for me, but I owe a lot to this post Move files from one repository to another, preserving git history.

Use case

To reiterate, we have a Java library and a C# library sitting in the same git code base and we want to move the C# library into its own repository whilst keeping the commit history.

Steps

  • Clone the repository (i.e. the code we’re wanting to move)
  • CD into it
  • From a command line, run
    git remote rm origin
    

    This will remove the remote url and means we're not going to accidentally commit to the original/source repository.

  • Now we want to filter out anything that's not part of the code we want to keep. It's hoped that the C# code, like ours, was in its own folder (otherwise things will be much more complicated). So run
    git filter-branch --subdirectory-filter <directory> -- --all
    

    Replace <directory> with the relative folder, i.e. subfolder1/subfolder2/FOLDER_TO_KEEP

  • Run the following commands

    git reset --hard
    git gc --aggressive 
    git prune
    git clean -fd
    
  • Now, if you haven't already created a remote repository, do so and then run
     
    git remote add origin <YOUR REMOTE REPO>
    
  • (This should have been handled by the previous step, but if the remote url needs correcting you can run git remote set-url origin https://youreposerver/yourepo.git)

  • Finally, push everything including tags
    git push -u origin --all
    git push origin --tags
    

Getting started with jekyll

GitHub Pages, by default, uses jekyll and I wanted to get something running locally to test things.

Getting everything up and running

Let's start by installing Ruby: go to RubyInstaller for Windows Downloads if you don't already have Ruby and Gem installed.

Now go through the Jekyll Quick-start Instructions – I’ll list them here also.

  • gem install bundler jekyll
  • jekyll new my-awesome-site
  • cd my-awesome-site
  • bundle exec jekyll serve

So if all went well, the last line of these instructions will run up our jekyll site.

Testing our GitHub pages

  • Clone (if you don't already have it locally) your repository with your GitHub Pages content
  • Run git checkout master, i.e. whichever branch stores your markdown/html file content (in other words not the gh-pages branch, if you're using the standard master/gh-pages branches).
  • I don't have a Gemfile, so in the root folder create a file named Gemfile with the following contents (if you already have a Gemfile, add these two lines)
    source 'https://rubygems.org'
    gem 'github-pages', group: :jekyll_plugins
    
  • Run bundle install
  • Run bundle exec jekyll serve

Note: You can commit the Gemfile and Gemfile.lock files to your GitHub repository; these are not used by GitHub Pages.

After you've run up the server a _site folder will be created; this need not be committed.

Changing the theme

The first thing you might want to try is changing the theme to one of the other supported themes. Simply open the _config.yml file and change the name of the theme, i.e.

theme: jekyll-theme-cayman

Other supported themes include jekyll-theme-architect, jekyll-theme-dinky, jekyll-theme-hacker, jekyll-theme-minimal and jekyll-theme-slate.

If you change the theme you'll need to shut the server down and re-run bundle exec jekyll serve, which will run jekyll build and update the _site directory.