Creating scaffolding with yeoman

Yeoman is a tool for generating scaffolding, i.e. predefined project structures (see the generators page for a list of existing generators).

You could generate things other than code/projects equally well using yeoman.

There are a fair few existing generators, but you can also define your own. Say, for example, you have a standard set of tools used to create your Node-based servers, i.e. you want TypeScript, express, eslint, jest etc. We could use yeoman to set everything up.

Of course you could create a shell script for this, or a custom CLI, but yeoman gives you the ability to do all this in JavaScript, with the power that comes from that language and its ecosystem.

Within a yeoman script we can also interact with the user via the console, i.e. ask for input, and create templates for building code, configuration etc.

Installing the tooling

Before we start using yeoman we need to install it, so run npm (or the yarn equivalent) as shown below

npm install -g yo 

To check everything worked simply run

yo --version

At the time of writing, my version is 3.1.1.

Before we get onto the real topic of this post, let’s just check out some yeoman commands

  • Listing installed generators
    yo --generators
    
  • Diagnose yeoman issues
    yo doctor
    

Creating our own generators

Okay, so this is what we’re really interested in. I have a bunch of technologies I often use (my usual stack of tech/packages). For example, if I’m creating a Node-based server, I’ll tend to use TypeScript, express, jest and so on. Whilst we can, of course, create things like a git repo with everything set up and just clone it, or write shell scripts to run our commands, as mentioned, with yeoman we can also template our code as well as interact with the user via the CLI to conditionally generate parts of our application.

There appears to be a generator for producing generators, but this failed to work for me; for completeness, here it is

npm install -g yo generator-generator

Now, let’s write our first generator…

Run the following, to create our package.json file

yarn init -y

The first thing to note is that the generator name should be prefixed with generator-. Therefore we need to change the “name” within package.json, for example

"name": "generator-server"

The layout of our files is expected to be either (relative to our root)

package.json
generators/app/index.js
generators/router/index.js

OR

package.json
app/index.js
router/index.js

Whichever layout we choose should be reflected in package.json like this

"files": [
    "generators"
  ],

OR

"files": [
    "app",
    "router"
  ],

You might, at this point, wonder what the point of the router is. Whilst this is within the yeoman getting started guide, it appears that ultimately any folder added alongside the app folder will appear as a “subcommand” (if you like) of your generator. In this example, assuming the app name is generator-server (see below), we will also see that router can be run using the yo server:router syntax. Hence you can create multiple commands under your main yeoman application.

We’ll also need to add the yeoman-generator package before we go too much further, so run

yarn add yeoman-generator

So here’s a minimal example of what your package.json might look like

{
  "name": "generator-server",
  "version": "0.1.0",
  "description": "",
  "files": [
    "generators"
  ],
  "keywords": ["yeoman-generator"],
  "dependencies": {
    "yeoman-generator": "^1.0.0"
  }
}

Writing our generator code

In the previous section we got everything in place to allow our generator to be recognised by yeoman, so let’s now write some code.

Here’s an example of a starting point from the yeoman website.

In generators/app/index.js we have a simple example

var Generator = require("yeoman-generator");
module.exports = class extends Generator {
   method1() {
      this.log('method 1 just ran');
   }
   method2() {
      this.log('method 2 just ran');
   }
};

Sadly this example uses require rather than ES module import syntax; maybe I’ll look into that in a future post, but for now it’s not too big a deal. There is a @types/yeoman-generator package if you want to work with TypeScript, but I’ll again leave that for a possible future post.

When we get to run this generator, you’ll find that both methods are run, hence we get the following output

method 1 just ran
method 2 just ran

All the methods we add to the Generator class are public and so are run by yeoman. We can make them private by prefixing the method name with an underscore (a fairly standard JavaScript convention to suggest a field or method is private or should be ignored).

The order that the methods appear is the order they’re executed in, hence switching these two methods around will result in method2 running first, followed by method1.
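To illustrate those two rules (definition order, underscore prefix), here’s a plain JavaScript sketch of the idea. This is not yeoman’s actual implementation, just a simulation of the behaviour:

```javascript
// Simulates yeoman's run loop: discover prototype methods in definition
// order, skipping the constructor and underscore-prefixed "private" methods.
class Generator {
  method1() { return 'method 1 just ran'; }
  _private() { return 'skipped by the run loop'; }
  method2() { return 'method 2 just ran'; }
}

const instance = new Generator();
const tasks = Object
  .getOwnPropertyNames(Object.getPrototypeOf(instance))
  .filter((name) => name !== 'constructor' && !name.startsWith('_'));

const output = tasks.map((name) => instance[name]());
console.log(output); // → [ 'method 1 just ran', 'method 2 just ran' ]
```

This works because JavaScript preserves the definition order of string-keyed class methods, which is exactly what makes yeoman’s ordering behaviour possible.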

We’re not going to write any further code at this point, I’ll leave coding the generator for another post.

Testing our generator

At this point we don’t want to deploy our generator remotely, but want to simply test it locally. To do this we run the following command from the root folder of our generator

yarn link

This will create a symbolic link for npm/yarn and now we can run

yo --generators

which should list our new generator, named server.

Now we have our generator available to yeoman, we simply type

yo server

Obviously server is replaced by the name of your generator.

Haskell basics – Functions

Note: I’m writing these Haskell blog posts whilst learning Haskell, so do not take them as expert guidance. I will amend the post(s) as I learn more.

Functions are created using the format

function-name params = function-definition

Note: function names must be camelCase.

So for example, let’s assume we have a Calculator module with add, subtract, multiply and divide functions; it might look like this

module Modules.Calculator where

add a b = a + b
subtract a b = a - b
multiply a b = a * b
divide a b = a / b

Functions can be created without type information (as shown above), however it’s considered good practice to specify type annotations for functions. So, for example, let’s annotate the add function to say it takes Int inputs and returns an Int result

add :: Int -> Int -> Int
add a b = a + b

Now if we try to use floating point numbers with the add function, we’ll get a compile-time error. Obviously it’s more likely we’d want to handle floating point numbers with this function, so let’s change it to

add :: Double -> Double -> Double

Migrating a folder from one git repo to another

I had a situation where I had a git repo consisting of a Java project and a C# project (a small monorepo). We decided that permissions for each project needed to differ (i.e. the admins of those projects) and, maybe more importantly, changes to one were causing “Pending” changes to the other within CI/CD, in this case TeamCity.

So we needed to split the project. Of course it’s easy to create a new project and copy the code, but we wanted to keep the commit history etc.

What I’m going to list below are the steps that worked for me, but I owe a lot to this post Move files from one repository to another, preserving git history.

Use case

To reiterate, we have a Java library and a C# library sitting in the same git code base and we want to move the C# library into its own repository whilst keeping the commit history.

Steps

  • Clone the repository (i.e. the code we’re wanting to move)
  • CD into it
  • From a command line, run
    git remote rm origin
    

This will remove the remote url and means we’re not going to accidentally commit to the original/source repository.

  • Now we want to filter out anything that’s not part of the code we want to keep. It’s hoped that the C# code, like ours, was in its own folder (otherwise things will be much more complicated). So run
    git filter-branch --subdirectory-filter <directory> -- --all
    

    Replace <directory> with the relative folder, i.e. subfolder1/subfolder2/FOLDER_TO_KEEP

  • Run the following commands

    git reset --hard
    git gc --aggressive 
    git prune
    git clean -fd
    
  • Now, if you haven’t already created a remote repository, do so and then run
     
    git remote add origin <YOUR REMOTE REPO>
    
  • If the remote URL wasn’t set by the previous step, you can set it explicitly with git remote set-url origin https://youreposerver/yourepo.git

  • git push -u origin --all
    git push origin --tags
    
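Before running this against anything real, the filter step can be sanity-checked end-to-end against a throwaway repository. A sketch (the folder and file names are invented for illustration; note that modern git warns that filter-branch is superseded by git filter-repo, but the command still works):

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1  # skip filter-branch's warning pause

tmp=$(mktemp -d)
cd "$tmp"

# build a small "monorepo" with a Java and a C# folder, two commits
git init -q monorepo
cd monorepo
git config user.email "you@example.com"
git config user.name "you"
mkdir java csharp
echo 'class A {}' > java/A.java
echo 'class B {}' > csharp/B.cs
git add .
git commit -qm "initial"
echo '// change' >> csharp/B.cs
git commit -qam "update csharp"

# keep only the csharp folder, rewriting history so it becomes the root
git filter-branch --subdirectory-filter csharp -- --all

ls                  # only B.cs remains, at the repo root
git log --oneline   # both commits survive in the rewritten history
```

After the filter, the working tree contains only the C# files and the log shows only the commits that touched that folder, ready to push to the new remote.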

Getting started with jekyll

GitHub pages, by default, uses jekyll and I wanted to get something running locally to test things.

Getting everything up and running

Let’s start by installing Ruby: go to RubyInstaller for Windows Downloads if you don’t already have Ruby and Gem installed.

Now go through the Jekyll Quick-start Instructions – I’ll list them here also.

  • gem install bundler jekyll
  • jekyll new my-awesome-site
  • cd my-awesome-site
  • bundle exec jekyll serve

So if all went well, the last line of these instructions will run up our jekyll site.

Testing our GitHub pages

  • Clone (if you don’t already have it locally) your repository with your GitHub pages
  • Run git checkout master, i.e. whichever branch stores your markdown/html file content (in other words not the gh-pages branch if you’re using the standard master/gh-pages branches).
  • I don’t have a Gemfile, so in the root folder, create a file named Gemfile with the following contents (if you already have a Gemfile, add these two lines)
    source 'https://rubygems.org'
    gem 'github-pages', group: :jekyll_plugins
    
  • Run bundle install
  • Run bundle exec jekyll serve

Note: You can commit the Gemfile and Gemfile.lock files to your GitHub repository; these are not used by GitHub Pages.

After you’ve run up the server, a _site folder will be created; this need not be committed.

Changing the theme

The first thing you might want to try is change the theme to one of the other supported themes. Simply open the _config.yml file and change the name of the theme, i.e.

theme: jekyll-theme-cayman

The other supported themes are listed on the GitHub Pages supported themes page.

If you change the theme you’ll need to shut the server down and re-run bundle exec jekyll serve, which will run jekyll build and update the _site directory.

Dependency Injection and Blazor

In a previous post, Blazor routing and Navigation, we injected the NavigationManager into our page using the following

@inject NavigationManager NavManager

So when we use @inject followed by the type we want injected, ASP.NET/Blazor will automatically supply the NavigationManager (assuming one exists).

Adding services

Of course we can also add our own types/services to the DI container.

On a Blazor WebAssembly application, we can add types to the Program.cs, Main method, for example

public static async Task Main(string[] args)
{
   var builder = WebAssemblyHostBuilder.CreateDefault(args);
   // template generated code here

   // my custom DataService
   builder.Services.AddSingleton<IDataService, DataService>();

   await builder.Build().RunAsync();
}

In Blazor Server, we add our types to the Startup.cs, ConfigureServices method, for example

public void ConfigureServices(IServiceCollection services)
{
   // template generated code here

   // my custom DataService
   services.AddSingleton<IDataService, DataService>();
}

Service lifetime

In the examples in the previous section we added the service as a singleton.

  • Scoped – this means the service is scoped to the connection. This is the preferred way to handle per-user services – there is no concept of scoped services in WebAssembly, as it’s a client technology and hence already scoped per user
  • Singleton – as you’d expect, this is a single instance for the lifetime of the application
  • Transient – each request for a transient service will receive a new instance of the service

If you need access to a service in a component class, i.e. you’re creating your own IComponent, you have to mark a property with the InjectAttribute

public class MyComponent : ComponentBase
{
   [Inject]
   IDataService DataService { get; set; }
}

Of course constructor injection (my preferred way to do things) is also available, so we just write code such as this, assuming that MyService is created using the service container

public class MyService
{
   public MyService(IDataService dataService)
   {
      // do something with dataService
   }
}

Destructuring in JavaScript

In JavaScript/TypeScript, if you’re using the Prefer destructuring from arrays and objects (prefer-destructuring) eslint rule, you’ll want to use destructuring syntax to get values from objects and arrays.

If we imagine we have an object like this

class Person
{
   firstName: string;
   lastName: string;
}

Then to get the firstName from a Person instance, we tend to use

const firstName = person.firstName;

Instead this rule prefers that we use the following syntax

const { firstName } = person;

If for some reason you need to get the value using destructuring syntax but assign it to a new variable name (for example in React, if you’re destructuring state which may have been changed), then we use

const { firstName: fname } = person;
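Destructuring also composes with renaming, default values, nesting and arrays. A quick sketch (middleName and address are invented fields for illustration, not part of the Person example above):

```javascript
const person = {
  firstName: 'Scooby',
  lastName: 'Doo',
  address: { town: 'Coolsville' }, // invented nested field for the example
};

// rename, default value and nested destructuring in a single pattern
const { firstName: fname, middleName = 'none', address: { town } } = person;
console.log(fname, middleName, town); // → Scooby none Coolsville

// arrays destructure by position; rest gathers the remainder
const [first, ...rest] = ['a', 'b', 'c'];
console.log(first, rest); // → a [ 'b', 'c' ]
```

The default value only kicks in when the property is undefined, which is why middleName falls back to 'none' here.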
    

Configuration and Blazor

In a previous post we looked at dependency injection within Blazor. One of the services available by default is an implementation of IConfiguration, hence to inject the configuration object we do the following

@inject IConfiguration Configuration

and we can interact with the configuration using a key, i.e.

@Configuration["SomeKey"]

For Blazor on the server we can simply add the key/value to the appsettings.json file, like this

"SomeKey": "Hello World"

By default Blazor WebAssembly does not come with an appsettings.json, so you’ll need to add one yourself. This file will need to be in the wwwroot folder, as these files will be deployed as part of your client.

You can put sensitive data in the server’s appsettings.json because it’s hosted on the server, but do not put sensitive information in the appsettings.json for a WebAssembly/client application, otherwise it will be downloaded to the user’s machine.

If you want to store more complicated data in the appsettings.json, for example a JSON object, you’ll need to create a class that mirrors the shape of the data. For example, if your appsettings.json had the following

"Person": {
  "FirstName": "Scooby",
  "LastName": "Doo"
}

then we’ll need a class to match the structure of the data. The name of the class is not important, the structure is, so here we have

public class MyPerson
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
}

Now to access this we need to use

Configuration.GetSection("Person").Get<MyPerson>();

References

Carl Franklin’s Blazor Train: Configuration and Dependency Injection

Adding images to your github README…

Obviously there are several ways to add images to your GitHub README.md or other markdown files in GitHub.

1. Check them into the repository that includes the README.md
2. Use a CDN or other file storage outside of GitHub
3. Attach the image to an issue within your repository

If you are going to use the first option, then there’s an obvious downside, in that the image(s) are now part of your repo and will add to the overall size of every download, clone, fork etc.

Obviously the second option is great if you have CDN access or other file storage.

The third option is presented on GitHub Tricks: Upload Images & Live Demos and simply requires

1. Create an issue in your repo
2. Drag and drop the image from File Explorer (or the like) into the comment section of the issue
3. When the link is updated in the comment, i.e. you see the image, copy the link using your browser (i.e. right mouse click, copy link)
4. Now use this link in your markdown

To use the link in GitHub markdown just use

![Your Text](Image Url)
    

Generating IL using C#

Note: This is an old post I had sitting around for a couple of years. I’m not sure how complete or useful it is, but it’s better published than hidden away and it might be of use at some point.

There are different ways to dynamically generate code for .NET, using tools such as T4, custom code generators run via build targets etc. Then there’s creating your assembly, modules, types etc. via IL. I don’t mean literally writing IL files, but instead generating your IL via C# using the ILGenerator class and Emit methods.

I wanted to write a factory class that worked a little like Refit, in that you define the interface and Refit “magically” creates an implementation of the interface and calls boilerplate code to inject and/or do the work required to make the code work.

Refit actually uses build targets and code generation via GenerateStubsTask and InterfaceStubGenerator, not IL generation.

IL is not really a simple way to achieve these things (source generators, currently in preview, would be far preferable) but maybe in some situations IL generation suits your requirements, and I thought it’d be an interesting thing to try anyway.

Use case

What I want to do is allow the developer to create an interface which contains methods (we’re only going to support “standard” methods at this point). The methods may take multiple arguments/parameters and must return Task (for void) or Task of T (for return values). Just like Refit, the idea is that the developer marks methods in the interface with attributes which then tell the factory class what code to generate for the implementation.

All very much along the lines of Refit.

Starting things off by creating our Assembly

We’re going to need to create an Assembly, at runtime, to host our new types. So the first thing we do, using the domain of the current thread, is call DefineDynamicAssembly, passing in both an AssemblyName and an AssemblyBuilderAccess parameter, which creates an AssemblyBuilder. This becomes the starting point for the rest of our builders and eventually our IL code.

Note: If you want to save the assembly to disk, which is very useful for debugging by inspecting the generated code using ILSpy or the like, then you should set the AssemblyBuilderAccess to AssemblyBuilderAccess.RunAndSave and supply the file path (not the filename) as the fourth argument to DefineDynamicAssembly.

Before we get into this code further, let’s look at a simple interface which will be our starting point.

public interface IService
{
   Task<string> GetName();
}

Whilst the aim, eventually, is to include attributes on our interface and return different generic types, for this post we’ll not get into this, but instead simply generate an implementation which ignores the arguments passed and expects either a return of Task or Task<string>.

Let’s create our assembly – here’s the code for the TypeGenerator class.

public class TypeGenerator
{
   private AssemblyBuilder _assemblyBuilder;
   private bool _save;
   private string _assemblyName;

   public TypeGenerator WithAssembly(string assemblyName, string filePath = null)
   {
      var currentDomain = Thread.GetDomain();
      _assemblyName = assemblyName;
      _save = !String.IsNullOrEmpty(filePath);

      if (_save)
      {
         _assemblyBuilder = currentDomain.DefineDynamicAssembly(
            new AssemblyName(_assemblyName),
            AssemblyBuilderAccess.RunAndSave,
            filePath);
      }
      else
      {
         _assemblyBuilder = currentDomain.DefineDynamicAssembly(
            new AssemblyName(_assemblyName),
            AssemblyBuilderAccess.Run);
      }
      return this;
   }

   public static TypeGenerator Create()
   {
      return new TypeGenerator();
   }
}

The code above will not actually save the assembly but is part of the process we need to go through to actually save it. Let’s add a Save method which will actually save the assembly to disk.

public TypeGenerator Save()
{
   if (!String.IsNullOrEmpty(_assemblyName))
   {
      _assemblyBuilder.Save(_assemblyName);
   }
   return this;
}

Note: we’ll also need to assign the assembly name to the Module which we’re about to create.

Now we need a Module

Creating the module is simply a case of calling DefineDynamicModule on the AssemblyBuilder that we created. This will give us a ModuleBuilder, which is where we’ll start generating our type code.

As noted, if we are saving the module then we also need to assign it the assembly name, so here’s the code for creating the ModuleBuilder

public TypeGenerator WithModule(string moduleName)
{
   if (_save)
   {
      _moduleBuilder = _assemblyBuilder.DefineDynamicModule(
         moduleName, _assemblyName);
   }
   else
   {
      _moduleBuilder = _assemblyBuilder.DefineDynamicModule(
         moduleName);
   }
   return this;
}
    

Creating our types

Considering this post is about IL code generation, it’s taken a while to get to it, but we’re finally here. We’ve created the assembly and within that a module. Our current implementation for generating a type will take the interface as a generic parameter (only interfaces will be handled); here’s the method

public TypeGenerator WithType<T>()
{
   var type = typeof(T);

   if (type.IsInterface)
   {
      EmitTypeFromInterface(type);
   }

   return this;
}

The EmitTypeFromInterface method will start by defining a new type using the ModuleBuilder. We’ll create a name based upon the interface type’s name. Obviously the name needs to be unique, so to make things simple we’ll just prefix the text “AutoGenerated”; hence type IService will become implementation AutoGeneratedIService. We’ll also need to set up the TypeAttributes to define our new type as a public class and, in our case, ensure the new type extends the interface. Here’s the code to generate a TypeBuilder (and also create the constructor for the class)

private void EmitTypeFromInterface(Type type)
{
   _typeBuilder = _moduleBuilder.DefineType($"AutoGenerated{type.Name}",
      TypeAttributes.Public |
      TypeAttributes.Class |
      TypeAttributes.AutoClass |
      TypeAttributes.AnsiClass |
      TypeAttributes.BeforeFieldInit |
      TypeAttributes.AutoLayout,
      null, new[] { type });

   var constructorBuilder =
      _typeBuilder.DefineDefaultConstructor(
         MethodAttributes.Public |
         MethodAttributes.SpecialName |
         MethodAttributes.RTSpecialName);

   // insert the following code snippets here
}
    

Implementing our methods

Obviously an interface requires implementations of its methods – you can actually save the assembly without supplying the methods, but you will get a TypeLoadException stating that the new type does not have an implementation for the method.

In the code below we loop through the methods on the interface type and, using the TypeBuilder, we create a MethodBuilder per method, which will have the same name, return type and parameters and will be marked as public and virtual. From this we finally get to emit some IL using the ILGenerator. Here’s the code

foreach (var method in type.GetMethods())
{
   var methodBuilder = _typeBuilder.DefineMethod(
      method.Name,
      MethodAttributes.Public |
      MethodAttributes.Virtual,
      method.ReturnType,
      method.GetParameters().Select(p => p.ParameterType).ToArray());

   var ilGenerator = methodBuilder.GetILGenerator();

   // IL Emit code goes here
}
    

A very basic overview of writing IL code

We can generate IL code using an ILGenerator and Emit methods from a C# application (for example). We can also write IL directly as source code files. For example, create a file test.il

Now add the following code

.assembly MyAssembly
{
}

.method void Test()
{
.entrypoint
ret
}

The text preceded by the . are directives for the IL compiler (ILASM, which comes with Visual Studio). Within the file we’ve firstly declared an assembly named MyAssembly. Whilst this file would compile without the .assembly, it will not run and will fail with a BadImageFormatException.

Next we define a method (using the .method directive) named Test. The .entrypoint directive declares this is the entry point to our application (as this will compile to an EXE). Hence, unlike C# where we use Main as the entry point, any method may be the entry point, but only one method may be marked as the entry point.

To create a correctly formed method we also need the last line of code to be a ret.

If you now compile this file using

ilasm test.il

you might notice that ilasm outputs the warning Non-static global method ‘Test’, made static. Obviously in C# our entry method would normally be a static method. Simply add the keyword static as below

.method static void Test()
{
.entrypoint
ret
}

Let’s now turn this little IL application into the classic Hello World by calling the Console.WriteLine method.

If you’ve ever written any assembly code you’ll know we pass arguments to subroutines by placing the arguments on the stack, and then the callee will pop the expected number of arguments. So to output a string, we’ll need to push it onto the stack – in this case we use ldstr, which specifically handles strings.

Console.WriteLine is available in the System namespace within mscorlib, and to invoke a method we’ll need to call it specifying the overload (if any) to use along with a fully qualified name, hence our Test method becomes

.method static void Test()
{
.entrypoint

ldstr "Hello World"
call void [mscorlib]System.Console::WriteLine(class System.String)
ret
}

The easiest way to learn IL is to look at decompilations from tools such as ildasm, ILSpy, Reflector or dotPeek: write the code you wish to generate IL for, compile it, then decompile with one of these tools to see what’s going on.