Using GIT from the CLI

Whilst I use GUI tools, such as the Visual Studio GIT tooling, TortoiseGit etc. I really wanted to get a good understanding of using GIT from the command line. I’ve heard it said that this is the best way to use GIT – whilst I’ll leave that to the reader to decide, let’s get to grips with common tasks etc. from the CLI.

Note: This is not meant to be a comprehensive list of all commands or even all options in the commands I’ve listed. It’s more a list of commands I often use.

Creating our repository

As you probably know, git is a distributed source control system, so we can create a repository locally (as opposed to requiring a server to host our source code). This means we can create a repository for anything we do, allowing us to commit changes/revert etc.

To create a repository, first create a folder for your files (if you’ve not already done so) then run the following command to create the repository (which will add the .git folder)

git init

This can be run against an empty folder or one with files within it. It will not delete any existing files.

If we want to create a repository that can act as a remote or shared repository we’d use

git init --bare

This remote type of repository is useful in team situations and is special in that no code changes will occur against this repository; it just acts as the PUSH location for each user’s repository.

Staging files

After we’ve added our files or folders we’ll want to stage them. This step doesn’t always exist in the UI tools (or at least not as a separate step), but allows us, in essence, to state which files/folders are to be part of the commit at a specific moment in time. Staged changes are not yet committed to the repository and hence can still be lost. To add all files/folders we can execute

git add .

To add individual files we list each one after the other.
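For example, assuming we had files named file1.txt and file2.txt (purely illustrative names), we could stage just those with

git add file1.txt file2.txt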

Once staged, we could edit our files further and these changes will not be committed until they too are staged. So think of staging as copying the state of the file/folders at a point in time, ready for committing.

We would tend to stage changes to create a change-set of files, i.e. for a feature or functionality as a single commit.

Unstaging files

To unstage, we simply use

git rm --cached -r .

We replace the . with any specific files that we want to unstage; in such situations the -r can be omitted, i.e.

git rm --cached file1.txt

Note that git rm --cached also stops git tracking the file, so for a file that has already been committed, git reset HEAD file1.txt is an alternative that unstages changes without untracking the file.

Status

The state of the git repository, including which files are tracked, untracked etc. can be found using

git status

Committing files

When we’re ready to commit our staged files/folders, we execute

git commit

The editor associated with git (for example vi) will be displayed if you run this command, and we can then enter a message describing our commit. To commit with a message (i.e. without having to open an editor) simply type

git commit -m "Your message goes here"

Obviously replacing what’s in the speech marks with the log message you want for the commit.

If we want to stage & commit in one step (this only stages files already tracked by the repo., not new files) we can use the -a parameter, i.e.

git commit -a -m "Your message goes here"

Commits are not associated with an incrementing number, as in SVN, but instead with an SHA1 string. This is a bit of a pain in some ways, especially if, like me, you use the SVN commit number as part of your versioning system, i.e. 1.0.1234 would be the commit 1234 in SVN.
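If you do want a number that increases with each commit (a suggestion on my part rather than a git convention), the count of commits reachable from HEAD can act as a stand-in, or the abbreviated SHA1 itself can be embedded in the version string, for example

git rev-list --count HEAD
git rev-parse --short HEAD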

Viewing the log

We can look at the log of commits (i.e. each commit’s SHA1 and log message) using

git log

When the : or (END) prompt appears, press the space bar to page through the output and q to quit.

The log command is, by default, quite verbose; we can reduce this verbosity using

git log --oneline

This, as the parameter suggests, outputs each log entry on a single line, showing only the first 7 characters of the SHA1 string and the first line of the commit message.

Branches

To view the current list of branches we use

git branch

Master is the default branch, equivalent to trunk in SVN etc. So if you’ve not yet created any branches you’ll see only master listed.

To create a new branch off of master we use

git branch branch_name

Obviously replace branch_name with the name of your branch. The new branch is taken from whatever branch we’re currently on. In other words, if you’re on master (usually denoted by the name master in the command prompt, if your prompt is set up) then creating a branch from here will branch from master.

Checking out a branch

Switching to a branch is known as checking out a branch. So let’s assume we created a branch named rc1.0 from master; we can checkout (or switch to) this branch using

git checkout rc1.0

All of this happens on your local machine (unless you’re also linked to a remote repo.) so you can create branches cheaply and easily to work on locally. When linked to a remote repo. you can push and pull changes to that repo. to add your branches etc.
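As an aside, we can create and switch to a new branch in a single step using the -b option; a minimal example, reusing the rc1.0 name from above, would be

git checkout -b rc1.0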

We can delete a branch using

git branch -d rc1.0

If there are unmerged changes within the branch you’ll need to use the capitalized version of this parameter, i.e.

git branch -D rc1.0

Tags are branches

A tag is much like a branch in that it’s just a named reference in GIT, although a tag isn’t expected to move as new commits are made (in SVN, if I recall, a tag really is just a branch). All we’re really doing with a tag is creating a pointer/reference to a commit and in most cases versioning this in some way (at least conceptually).

So using

git tag -a v1.0

will result in a new annotated tag named v1.0 (git will prompt for a tag message, or one can be supplied with -m). We can list the tags by using

git tag

and delete using

git tag -d v1.0

Merging

Whether you pull changes from a remote repo. or you are simply working on multiple branches and wish to merge changes between them, you use

git merge branch_name

If there are conflicts then you’ll need to resolve those. This is where one of the diff/merge tools that come with the git GUIs is useful, or use your preferred editor.

GIT will be in a merging state. Make your changes such as resolving conflicts etc., then stage and commit your changes.

Cherry picking the commits you want

Occasionally you’ll be in a situation where you’ve been working on a branch and somebody says, “we need feature X in master”, but you’ve made other changes which you do not want merged. In such cases we look to cherry-pick just the commits we want.

The first thing we need to do is view the log of what’s been committed, looking for the commit SHA1s, as these are what we’ll use to tell the cherry-pick command which commits to “merge” into our branch. We do not need to switch to the other branch; we can run

git log branch_name

to get the log for the specified branch.

Now we run

git cherry-pick sha1_string

where sha1_string is the commit’s SHA1 – you need not list the whole SHA1; usually the first 4 or so characters, provided they’re unique, will suffice, in other words we need to use enough characters to uniquely identify the commit. We do not need to specify the branch as git will figure this out. Conflicts will still need to be resolved, like any merge. If no conflicts exist then the changes are auto-committed, otherwise you’ll need to resolve and add the affected files and then run

git cherry-pick --continue

Cloning a repository

When taking a copy of a repository, whether from a remote location such as GitHub or cloning a local repository we use the clone command, i.e.

git clone /c/Development/myapp

In this example I’m using git bash to clone from my c:\Development\MyApp repository into the currently selected folder.

Fetch and Pull

After we clone a repository, whether remote or local, we can fetch or pull as well as push to the other repository. This is all part of a distributed source control system.

To fetch changes from a repository we can use

git fetch

and to pull from our remote repository we use

git pull

Both fetch and pull update our local repository. However pull, in essence, does a fetch followed by a merge. So the main difference between a fetch and a pull is that a fetch will do an update (in SVN terms) but doesn’t force you to merge the changes, whereas pull will try to merge the changes and hence you may get merge conflicts. Obviously if you’re busy on some work but just want to see what’s happening on a remote repo. then you’d do a fetch.
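As a rough sketch of that equivalence (assuming a remote named origin and a branch named master), the following two commands achieve much the same as a single git pull

git fetch origin
git merge origin/master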

Push

When changes are made to a local repository and we want to push those to a remote we use

git push

This doesn’t work on our local repository unless it’s been created with the init --bare command.

Adding “remote” repositories

A remote repository, in this instance, is any other repository (i.e. it doesn’t have to be in a remote location). If, for example, we created a local repository and then a “remote” using init --bare, we need to tell git about the existence of this repository. To do this we use

git remote add remote_name url

So for example, we might create a remote named main on our local machine if we wanted, something like this

git remote add main /c/Development/mainrepo

Now we can push to the remote using its name, i.e.

git push main

If you want to make an upstream/remote location the default (i.e. so you no longer need to type the remote name) then you can set this using the -u option when pushing, i.e.

git push -u main master

where main is the remote name from my example and master is the branch being pushed; subsequent pushes of that branch can then simply use git push.

If you need to check the remotes, simply use

git remote -v

Stash

Whilst working on a branch, we might need to switch to another branch or maybe we’ve made changes and want to store them for later use, but not commit. In such cases we can stash our code.

git stash push

This command temporarily stores our changes with no name/message associated with them; this is fine if the stash entry is short lived, but it might be better to assign a message to the stash entry using

git stash push -m "Your name/message"

We can “pop” the changes back into our repo. using

git stash pop

If we have multiple items stashed, we can pop a specific one by referencing its position in the stash list, for example

git stash pop stash@{1}

We can list everything in the stash using

git stash list

Reverting

There are several ways we might revert something. We might want to revert our repo. to a previous version; in this case we use the checkout command with the SHA1 of the commit we want, i.e.

git checkout 39e5eec

This will result in a detached HEAD; to “reattach”, just check out a branch again, i.e.

git checkout master

Obviously replace master with your branch name.

If we have changes staged and/or unstaged but want to revert/clear all the changes we can use

git reset --hard

We can revert a commit by using

git revert d9b66e0 

Where d9b66e0 is the SHA1 of the commit you wish to revert.

Switching between branches

When we switch between branches we use the checkout command, but what if we switch to a branch and then want to switch back to the previous branch?

Sure, it’s easy to use command completion (i.e. the tab key) after the checkout command along with part of the name of your branch, but an alternative is to use the - option, i.e.

git checkout -

so for example, we might use the following to switch to master then back to the previous branch

git checkout master
git checkout -

Rebase instead of merge

In my post Advanced Git CLI I talked about the rebase command. Rebasing can be used for advanced scenarios but can also be used in place of a merge in far simpler scenarios.

Let’s assume we’ve created a branch from master and we’ve made a bunch of changes, but master has also changed. If we merge master we’ll get an entry in the log to show a merge, which is fine. An alternative to merge is to rebase, which basically stores your branch changes, then gets the updates from master (in this case) and then applies your changes on top of the master changes in your branch. It’s almost as if we’d branched master as it is now and then made our changes.

Note that this still may come with merge conflicts which you will need to resolve, but it makes for cleaner logs.

Here’s an example of the command being used from a branch and rebasing against master

git rebase master

The choice between using rebase or merge may depend upon what you want from your project history; rebasing rewrites history and can therefore hide collaboration commits etc., which may not be good. Also, rebasing on top of a merge (i.e. if you merged from master then made more changes) will probably result in conflicts which have no differences.

See also Merging vs. Rebasing for a more detailed comparison of the two commands.

Will there be a merge conflict?

Occasionally we’ll be in a situation where we’re working on our branch and need to check whether there are any merge conflicts awaiting us. Sadly there’s no simple command that says check-for-conflicts, as it were, but we can do the following (after ensuring master, for example, is up to date, and running this from the branch you wish to merge into)

git merge master --no-ff --no-commit

(obviously replace master with the branch you want to potentially merge into your branch)

The option --no-ff, as the name suggests, tells git merge to not fast forward, whilst --no-commit, again probably obvious, tells git merge to not auto-commit. In essence these options will merge from master (in this example) but not make any actual commits to your branch. However the merge will show us any conflicts etc.

Instead of committing this merge we simply abort it using

git merge --abort

so now our branch is back to the state pre-merge.
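Putting the whole check together, a minimal sequence might look like this (my-branch is purely an illustrative branch name); inspect any reported conflicts after the merge, then abort

git checkout my-branch
git merge master --no-ff --no-commit
git merge --abort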

UWP Application’s file restrictions and logging

UWP applications have restricted access to the local file system and this can cause a few issues when using logging frameworks if you’re expecting to write logs to c:\Temp
or %Temp% locations (for example).

Your UWP application can access the application’s installation folder (although this is generally read-only, so it’s not a great target for logs), for example

var installedLocation = 
   Windows.ApplicationModel.Package.Current.InstalledLocation.Path;

another alternative is the application data location, such as

var localFolder = 
   ApplicationData.Current.LocalFolder.Path;

// or

var localCache = 
   ApplicationData.Current.LocalCacheFolder.Path;

These last two locations will translate to a location such as the following, where username is (as you’d expect) the username the user logged into the machine with, and the GUID is the package family name GUID (taken from the application’s manifest). This is actually an extended GUID.

C:\Users\<username>\AppData\Local\Packages\<guid>

Note: At the time of writing I’m not sure where the string after the underscore which follows the GUID comes from.

Let’s take a quick look at using Serilog’s File Sink to write our log files. Using NuGet, install Serilog and Serilog.Sinks.File and use the following code

// requires: using Serilog; using Serilog.Formatting.Json;
Log.Logger = _logger = new LoggerConfiguration()
   .WriteTo.File(
      new JsonFormatter(renderMessage: true),
      ApplicationData.Current.LocalCacheFolder.Path + "\\log.txt",
      rollingInterval: RollingInterval.Minute)
   .MinimumLevel.Verbose()
   .CreateLogger();
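Once configured, a minimal usage sketch might look like the following (Log.Information and Log.CloseAndFlush are standard Serilog calls; the message text is just an example)

Log.Information("Application started");
// ... and when the application shuts down
Log.CloseAndFlush();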

References

File access permissions

OnIdiom and OnPlatform

When developing a Xamarin Forms cross platform application, we’ll have a shared library where (hopefully) the bulk of our code will go, be it C# or XAML.

However we may still need to make platform specific changes to our XAML, whether it’s images, margins, padding etc. Along with the possible differences per platform we also have the added complexity of the idiom, i.e. desktop, phone or tablet (for example), which might require its own differences; for example maybe we display more controls on a tablet compared to the phone.

OnPlatform

We’re still likely to need to handle different platforms within this shared code, for example in Xamarin Forms TabbedPage we used OnPlatform like this

<ContentPage.Icon>
   <OnPlatform x:TypeArguments="FileImageSource">
      <On Platform="iOS" Value="history.png"/>
   </OnPlatform>
</ContentPage.Icon>

The above simply declares that, on iOS, the ContentPage’s Icon comes from the file history.png.

We can handle more than just images, obviously we might look to handle different margins, fonts etc. depending upon the platform being used.

We can declare values for multiple platforms using comma separated values in the Platform attribute, for example

<OnPlatform x:TypeArguments="FileImageSource">
   <On Platform="iOS, Android" Value="history.png"/>
   <On Platform="UWP" Value="uwphistory.png"/>
</OnPlatform>

OnPlatform’s Platform attribute currently supports iOS, Android, UWP, macOS, GTK, Tizen and WPF.

Of course we can simply use Device.RuntimePlatform in code-behind to find out which platform the application is running on, if preferred.
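For example, a minimal code-behind sketch, reusing the illustrative file names from the XAML above, might be

var icon = Device.RuntimePlatform == Device.iOS || Device.RuntimePlatform == Device.Android
   ? "history.png"
   : "uwphistory.png";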

OnIdiom

Working in much the same way as OnPlatform, OnIdiom is used for handling different XAML or code based upon whether the current device is classed as Unsupported, Phone, Tablet, Desktop, TV or Watch.

In XAML we might change the StackLayout orientation, for example

<StackLayout.Orientation>
   <OnIdiom x:TypeArguments="StackOrientation">
      <OnIdiom.Phone>Vertical</OnIdiom.Phone>
      <OnIdiom.Tablet>Horizontal</OnIdiom.Tablet>
   </OnIdiom>
</StackLayout.Orientation>

We can use the Idiom within code-behind by checking the state of Device.Idiom.
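For example, mirroring the XAML above in code-behind (where stackLayout is a hypothetical StackLayout instance), we might write

stackLayout.Orientation = Device.Idiom == TargetIdiom.Phone
   ? StackOrientation.Vertical
   : StackOrientation.Horizontal;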

Handling orientation in Xamarin Forms

Many years ago I wrote some Windows CE/Pocket PC code. One of the problems was handling different orientations in the same UI, for example when the device is switched from Portrait to Landscape; in some cases we can use the same layout in both orientations, but other times, and probably in more cases, this isn’t so simple.

Xamarin Forms does not raise orientation events or the like; instead we need to override a Page’s OnSizeAllocated method, for example

protected override void OnSizeAllocated(double width, double height)
{
   base.OnSizeAllocated(width, height);

   if (width < height)
   {
       // portrait orientation
   }
   else if (height < width)
   {
       // landscape orientation
   }
   else
   {
      // square layout
   }
}

Note: There is also a Device Orientation plugin/nuget package for Xamarin Forms which raises events when orientation changes.

Now we can handle changes to our layout in one of a few ways.

We might be able to reparent controls, change layout orientation and/or change rows/columns and so on, all in code. This is fairly efficient in terms of reusing the existing controls but is obviously less useful from a design perspective.

We might be able to create the layout in such a way that we can just change the orientation of StackLayouts etc., hence changing only a few properties to reflect the new orientation. This might be more difficult to set up successfully on complex pages.

An alternative method, which is a little wasteful in terms of control creation but gives us a good design story, is to design two ContentViews, one for Portrait and one for Landscape. Obviously we’ll need to bind the controls to the same view model properties etc. However, with this solution we can more rapidly get our UI up and running, for example

<controls:OrientationView>
   <controls:OrientationView.Landscape>
      <views:LandscapeView />
   </controls:OrientationView.Landscape>
   <controls:OrientationView.Portrait>
      <views:PortraitView />
   </controls:OrientationView.Portrait>
</controls:OrientationView>

The OrientationView might look like this

public class OrientationView : ContentView
{
   public View Landscape { get; set; }
   public View Portrait { get; set; }
   public View Square { get; set; }

   private Page _parentPage;

   protected override void OnParentSet()
   {
      base.OnParentSet();

      _parentPage = this.GetParentPage();
      if (_parentPage != null)
      {
         _parentPage.SizeChanged += PageOnSizeChanged;
      }
   }

   private void PageOnSizeChanged(object sender, EventArgs eventArgs)
   {
      if (_parentPage.Width < _parentPage.Height)
      {
         Content = Portrait ?? Landscape ?? Square;
      }
      else if (_parentPage.Height < _parentPage.Width)
      {
         Content = Landscape ?? Portrait ?? Square;
      }
      else
      {
         Content = Square ?? Portrait ?? Landscape;
      }
   }
}

Here’s the GetParentPage extension method

public static class ViewExtensions
{
   public static Page GetParentPage(this VisualElement element)
   {
      if (element != null)
      {
         var parent = element.Parent;
         while (parent != null)
         {
            if (parent is Page parentPage)
            {
               return parentPage;
            }
            parent = parent.Parent;
         }
      }
      return null;
   }
}

This allows us to basically design totally different UIs for portrait and landscape, maybe adding extra controls in landscape and removing them in portrait. The obvious downside is the duplication of controls.

Self hosting Tomcat in a Java Web application

Following on from my previous post where I created a web application in Java, let’s now look at hosting the WAR within an embedded Tomcat instance.

Adding dependencies to include embedded Tomcat

Add the following to your pom.xml (after the description tag)

<properties>
   <tomcat.version>9.0.0.M6</tomcat.version>
</properties>

Note: There’s a newer version of a couple of the dependencies, but this version exists for all three of the dependencies we’re about to add.

Now, add the following dependencies

<dependencies>
   <dependency>
      <groupId>org.apache.tomcat.embed</groupId>
      <artifactId>tomcat-embed-core</artifactId>
      <version>${tomcat.version}</version>
   </dependency>
   <dependency>
      <groupId>org.apache.tomcat.embed</groupId>
      <artifactId>tomcat-embed-jasper</artifactId>
      <version>${tomcat.version}</version>
   </dependency>
   <dependency>
      <groupId>org.apache.tomcat.embed</groupId>
      <artifactId>tomcat-embed-logging-juli</artifactId>
      <version>${tomcat.version}</version>
   </dependency>
</dependencies>

Time to run mvn install if not auto-importing.

Time to create the entry point/application

Let’s add a new package, com.putridparrot, to the src folder, then add a Java file; mine’s HostApp.java. Here’s the code

package com.putridparrot;

import org.apache.catalina.LifecycleException;
import org.apache.catalina.startup.Tomcat;

import javax.servlet.ServletException;
import java.io.File;

public class HostApp {
    public static void main(String[] args) throws ServletException, LifecycleException {

        Tomcat tomcat = new Tomcat();
        tomcat.setBaseDir("temp");
        tomcat.setPort(8080);

        String contextPath = "";
        String webappDir = new File("web").getAbsolutePath();

        tomcat.addWebapp(contextPath, webappDir);

        tomcat.start();
        tomcat.getServer().await();
    }
}

In the above we create an instance of Tomcat and then set up the port and the context for the web app, including the path to the web folder where our index.jsp is hosted in our WAR file.

We then start the server and wait until the application is closed.

By the way, it’s also worth adding logging to the pom.xml. The embedded Tomcat server uses “standard” Java based logging, so we can add the following to the pom.xml dependencies

<dependency>
   <groupId>log4j</groupId>
   <artifactId>log4j</artifactId>
   <version>1.2.15</version>
</dependency>

Create a run configuration

Now select Edit Configuration and add a configuration that runs main from HostApp.

Select Application and set the Main class to com.putridparrot.HostApp.

Testing

Run the newly added application configuration, don’t worry about the exceptions. You should see a line similar to

INFO: Starting ProtocolHandler [http-nio-8080]

At this point the server is running, so navigate your browser to http://localhost:8080 and check that the index.jsp page is displayed.

Configuring the embedded Tomcat for gzip/compression support

I’ve been looking into compression with gzip on some web code and hence wanted to configure this embedded Tomcat server to handle compression if/when requested via the Accept-Encoding: gzip header etc.

So add the following to the main method (before tomcat.start())

Connector c = tomcat.getConnector();
c.setProperty("compression", "on");
c.setProperty("compressionMinSize", "1024");
c.setProperty("noCompressionUserAgents", "gozilla, traviata");
c.setProperty("compressableMimeType", "text/html,text/xml,text/css,application/json,application/javascript");
tomcat.setConnector(c);

You’ll also need the import org.apache.catalina.connector.Connector;.

Testing gzip/compression

Testing whether Tomcat is using compression is best done with something like curl. I say this because, whilst you can use a browser’s debug tools (such as Chrome’s) and see a response with Content-Encoding: gzip, I really wanted to see the raw compressed data to be sure I really was getting compressed responses; the browser automatically decompressed the responses for me.

Go to the main method and just change “on” to “force”

i.e.

c.setProperty("compression", "force");

This just forces compression to be on all the time.

Now to test our server is set up correctly. Thankfully Windows 10 seems to have curl available from a command prompt, so this works in Linux or Windows.

Run the following

curl -H "Accept-Encoding: gzip,deflate" -I "http://localhost:8080/index.jsp"

This command adds the header Accept-Encoding and then outputs the header (-I) results from accessing the URL. This should show a Content-Encoding: gzip if everything was set up correctly.

To confirm everything is as expected, we can download the content from the URL and save it (in this case saved to index.jsp.gz) and then use gzip -d to decompress the file if we wish.

curl -H "Accept-Encoding: gzip,deflate" "http://localhost:8080/index.jsp" -o index.jsp.gz

This will create index.jsp.gz which should be compressed so we can use

gzip -d index.jsp.gz

to decompress it and we should see the expected web page.

Creating a Java Web Application with IntelliJ

Creating our project

  • Choose File | New | Project
  • Select Java Enterprise
  • Then tick the Web Application

If you have an application server setup, select or add it using New; I’m going to ultimately add an embedded Tomcat container, so I’m leaving this blank.

Finally give the project a name, i.e. MyWebApp.

Adding a Maven pom.xml

I want to use Maven to import packages, so add a pom.xml to the project root and then supply the bare bones (as follows)

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>MyWebApp</groupId>
    <artifactId>MyWebApp</artifactId>
    <packaging>war</packaging>
    <name />
    <version>MyWebApp.0.0.1-SNAPSHOT</version>
    <description />

</project>

In IntelliJ, select the pom.xml, right mouse click and select Add as Maven project.

Next, we want to tell Maven how to compile and generate our war, so add the following after the description tag in the pom.xml

<build>
   <sourceDirectory>${basedir}/src</sourceDirectory>
   <outputDirectory>${basedir}/web/WEB-INF/classes</outputDirectory>
   <resources>
      <resource>
         <directory>${basedir}/src</directory>
         <excludes>
            <exclude>**/*.java</exclude>
         </excludes>
      </resource>
   </resources>
   <plugins>
      <plugin>
         <artifactId>maven-war-plugin</artifactId>
         <configuration>
            <webappDirectory>${basedir}/web</webappDirectory>
            <warSourceDirectory>${basedir}/web</warSourceDirectory>
         </configuration>
      </plugin>
      <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-compiler-plugin</artifactId>
         <configuration>
            <source>1.8</source>
            <target>1.8</target>
         </configuration>
      </plugin>
   </plugins>
</build>

Creating a run configuration

Whilst my intention is to add a Tomcat embedded server, we can create a new run configuration at this point to test everything worked.

Select the Run, Edit Configuration option from the toolbar or Run | Edit Configuration and click + and add a Tomcat Server | Local.

Mine’s set with the URL http://localhost:8080/. Give the configuration a name and don’t forget to select the Deployment tab, press the + and then click Artifact… I selected MyWebApp:war.

Press OK and OK again to finish the configuration and now you can run Tomcat locally and deploy the war.

Don’t forget to execute mvn install to build your war file.

Object mapping with TinyMapper

I’ve written a few posts on AutoMapper in the past.

AutoMapper and the subject of this post, TinyMapper, are object mappers, which basically means translating/mapping one object to another object. That is to say you have (for example) a DTO (data transfer object) used to transfer data via REST or SOAP services. This object is in a format best suited to the transfer process, or is simply a format that doesn’t match how your application domain models might use the data. This is not the only scenario for using object mapping; our objects might match but their types differ in some subtle (or not so subtle) ways, or we might simply want to use an object mapper to clone/copy objects that are of the same type.

Let’s look at a simple example

Let’s take the simplest of examples whereby we have two objects which look almost exactly alike, differing only in one of the types used

public class PersonDto
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public string DateOfBirth { get; set; }
}

public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public DateTime DateOfBirth { get; set; }
}

As you can see, the property names are the same, but the DateOfBirth string needs to map to a System.DateTime.

Let’s use TinyMapper to map our objects

AutoMapper is an excellent tool for object mapping, but sometimes it might feel like using a “sledgehammer to crack a nut”. If you are not using its full set of capabilities or, more importantly, find that it’s a little slow for your needs, TinyMapper is an option.

TinyMapper purports to be quite a bit faster than AutoMapper, but obviously at the expense of all the AutoMapper capabilities. If you don’t need everything AutoMapper offers then TinyMapper makes a lot of sense.

Add the nuget package TinyMapper to your project and here’s some code

var dto = new PersonDto
{
   FirstName = "Scooby",
   LastName = "Doo",
   DateOfBirth = "13 September 1969"
};

TinyMapper.Bind<Person, PersonDto>();

var person = TinyMapper.Map<Person>(dto);

Note: Scooby Doo was not born on 13th September 1969; this was when the show debuted on CBS.

The line TinyMapper.Bind<Person, PersonDto>(); sets up TinyMapper so that it knows it’s meant to map these two types together when required, using the Map method. The first generic argument is the source and the second the target type. However it really just means which two types you want to bind together, so the order seems unimportant.

The final line then tells TinyMapper to Map the PersonDto to a new instance of a Person.

As you might expect, because the property names are the same, this is all we need for the mapping to succeed; the DateOfBirth type difference doesn’t matter as the values are convertible from one type to the other. The names, however, do matter.

What if my property names don’t match

If your property names differ, then we simply need to tell TinyMapper what should map to what. Let’s change our Dto DateOfBirth property to DOB. If you run the previous code you’ll get (as you probably expect) a null or default value for missing mappings, so in this case the Person DateOfBirth property will equal DateTime.MinValue.
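So the DTO now looks something like this

public class PersonDto
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public string DOB { get; set; }
}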

If we change our Bind method to the following

TinyMapper.Bind<PersonDto, Person>(cfg =>
{
   cfg.Bind(o => o.DOB, o => o.DateOfBirth);
});

All other properties remain mapped, based upon the property names, but now DOB is mapped to DateOfBirth.

Ignoring properties

Sometimes we want to not map a property, i.e. to ignore it. This can be handled via the configuration of the Bind method like this

TinyMapper.Bind<PersonDto, Person>(cfg =>
{
   cfg.Ignore(o => o.LastName);
});

Summary

TinyMapper is very simple to use and has minimal configuration options, but it’s also very fast, so depending upon your needs, TinyMapper might be the better-suited object mapper for your project.

Starting out with web components

Introduction

Web components are analogous to self-contained controls or components as you’ll have seen in Windows Forms, WPF etc. They allow developers to package up style, scripts and HTML into a single file which is also used to create a custom element, i.e. one that’s not part of standard HTML.

Sadly, they’re still not available in all browsers (although I believe polyfills exist for the main ones). Hence, for this post you’ll either need a polyfill or the latest version of Google Chrome (which I’m using to test this with).

What does it look like using web components?

To start with we create an HTML file for our web component; we then define the template for the component – this is basically the combination of style and the control’s HTML. Finally we write scripts that affect the shadow DOM by interacting with our web component.

In this post we’re going to create a simple little web component which will flash its text. Before we look at the component code let’s see the component in use within the following index.html file

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Web Component Test</title>
    <link rel="import" href="flashtext.html">
</head>
<body>
   <flash-text data-text="Hello World"></flash-text>
</body>
</html>

The key points here are that we include the web component using the line

<link rel="import" href="flashtext.html">

and then we use the custom element flash-text with its custom attributes, like this

<flash-text data-text="Hello World"></flash-text>

It’s important to note the closing element: a self-closing element is not correctly handled (at least at this time, using Google Chrome) and, whilst the first usage of a web component might be displayed, if we had several flashing text components in index.html it’s possible only one would be displayed, with no obvious error or reason given for the other components not displaying.

Creating our web component

We already decided our component will be stored in flashtext.html (as that’s what we linked to in the previous section).

To start off with, create the following

<template id="flashTextComponent">
</template>
<script>
</script>

We’ve created a new template with the id flashTextComponent. This id will be used within our script; it’s not what the pages using the component use. We create a new custom element by adding the following to the script

var flashText = document.registerElement('flash-text', {
});

But we’re getting a little ahead of ourselves. Let’s instead create some styling and the HTML for our component. Within the template element, place the following

<style>
   .flashText {
      float: left;
      width: 152px;
      background-color: red;
      margin-bottom: 20px;
   }

   .flashText > .text {
      color: #fff;
      font-size: 15px;
      width: 150px;
      text-align: center;
   }

</style>
<div class="flashText">
   <div class="text"></div>
</div>

The style section simply defines the CSS for both the flashText div and the text div within it. The div elements create our layout template. Obviously if you created something like a menu component, the HTML for this would go here with custom attributes which we’ll define next, mapping to the template HTML.

Next up we need to create the code and custom attributes to map the data to our template. Before we do this let’s make sure the browser supports web components by writing

var template = document.createElement('template');
if('content' in template) {
   // supports web components
}
else {
   // does not support web components
}

If no content exists on the template, Google Chrome will report the error in the dev tools stating content is null (or similar wording).

Within the // supports web components section place the following

var ownerDocument = document.currentScript.ownerDocument;
var component = ownerDocument.querySelector('#flashTextComponent');

var templatePrototype = Object.create(HTMLElement.prototype);

templatePrototype.createdCallback = function () {
   var root = this.createShadowRoot();
   root.appendChild(document.importNode(component.content, true));

   var name = root.querySelector('.text');
   name.textContent = this.getAttribute('data-text');

   setInterval(function(){
      name.style.visibility = (name.style.visibility == 'hidden' ? '' : 'hidden');
   }, 1000);
};

var flashText = document.registerElement('flash-text', {
    prototype: templatePrototype
});

Let’s look at what we’ve done here. First we get at the ownerDocument and then locate our template via its id flashTextComponent. Now we’re going to create an HTMLElement prototype which will (in essence) replace our usage of the web component. When the HTMLElement is created we interact with the shadow DOM, placing our component HTML into it and then interacting with parts of the template HTML, i.e. in this case placing data from the data-text custom attribute into the text content of the text div.

As we want this text to flash, we implement the script for this by toggling the visibility style of the text on an interval.

Finally, as mentioned previously, we register our custom element and “map” it to the previously created prototype.

Using in ASP.NET

ASP.NET can handle static pages easily enough; we just need to add the following to the RouteConfig

routes.IgnoreRoute("{filename}.html");

Now, inside _Layout.cshtml, put the <link rel="import" href="flashtext.html"> line (shown earlier) in the head section

and within the Index.cshtml (or wherever you want it) place your custom elements, i.e.

<flash-text data-text="Hello World"></flash-text>

References

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/template
https://developers.google.com/web/fundamentals/web-components/customelements
https://www.webcomponents.org/
https://stackoverflow.com/questions/45418556/whats-the-reason-behind-not-allowing-self-closing-tag-in-custom-element-in-spec

Code for this post

https://github.com/putridparrot/blog-projects/tree/master/webcomponent

fuslogvw, how could I forget you?

This is one of those reminders to myself…

Don’t forget that you can use fuslogvw to find problems loading assemblies.

Why do I need reminding?

I had an issue whereby a release build of an application I was working on had been configured for live/prod for the first time and somebody went to test the application which simply failed at start-up – just displaying a Windows dialog asking whether I wanted to close or debug the application.

Of course, the application worked perfectly on my machine and, oddly, the non-prod versions also worked fine on the other user’s machine. However the live/prod release had one change from the previous build. A new feature had been removed which wasn’t ready to go live and, unbeknown to me, the removal of its project caused the build to deploy older versions of a couple of DLLs as part of the new live/prod build.

On my machine this wasn’t an issue as .NET located the newer versions of the DLLs; on the other user’s machines these could not be located.

This is fine, it was all part of pre-release testing cycle but a little confusing as all the non-prod configurations worked fine on other user’s machines.

When do we get to fuslogvw?

I haven’t had the need to use fuslogvw for ages, but really should probably use it a lot more. To be honest, it makes sense to have it on all the time to catch such potential issues.

What fuslogvw can do is list any failures during start-up of a .NET application. Running fuslogvw.exe from the Windows SDK folder (for example C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.7.1 Tools) will result in the fuslog UI being displayed. As you might have guessed from the name, fus-log-vw is a viewer for the fusion log files.

Note: You must run fuslogvw as Admin if you want to change the settings, i.e. set the path of the log files, change the level of logging (I just wanted to log all the bind failures), etc.

Leave fuslogvw open and then run the .NET application you’re wanting to take a look at. Now you’ll need to press the Refresh button on fuslogvw to see the logs after the application has started and depending on what you are logging you may see either a list of assemblies that were loaded (for example when just logging everything) or see failures etc.

We can now see what assembly and what version of that assembly the application (we’re monitoring) tried to load and we can inspect the log itself for more information on the failure (via the log file or the fuslogvw).

It’s as simple as that.

Triangular numbers

Introduction

I was listening to a podcast on Gauss where they talked about a story that, when Gauss (and his classmates) were asked as youngsters to calculate the sum of 1 to 100, Gauss came back with the following solution.

If we take the first and last numbers and add them we get 101; if we take the 2nd and 2nd last numbers (2 + 99) we also get 101. Continuing this process pairs up all 100 numbers into 50 pairs, each summing to 101, hence the sum of 1 to 100 is 50 × 101 = 5050.

Please note, I’m not attributing the discovery of this formula to Gauss, so much as it just inspired me to look into this a little further.

So here we have [1, 100] and what we want to do is 1 + 2 + 3 + 4 … + 100. Using Gauss’s observation, we can calculate this as (1 + n) * n / 2, i.e. the last number added to the first gives us 101, which we then multiply by 50 (i.e. 100/2), giving us the result 5050.

The formula for this summation process is usually given as

n(n + 1)/2

which is obviously the same as we’ve listed, just rearranged.

Triangular numbers

What n(n + 1)/2 gives us is a list of what are known as triangular numbers.

If we think about arranging marbles into equilateral triangles, starting from 1 marble, we need 3 marbles to create the next equilateral triangle, then 6 marbles, then 10 and so on. This pattern can be calculated by 1, 1 + 2, 1 + 2 + 3, 1 + 2 + 3 + 4… and hence is the same as n(n + 1)/2.
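As a quick illustration in code, the nth triangular number can be computed directly from the formula (Triangular is just an illustrative name); for example Triangular(100) gives Gauss’s 5050

static long Triangular(long n) => n * (n + 1) / 2;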

Is x a triangular number

To determine if a number is triangular we need to rearrange our equation as follows

n(n + 1) / 2 = x
(n^2 + n) / 2 = x
n^2 + n = 2x

Solving this quadratic for n, it can be rewritten as

n = (sqrt(8x + 1) - 1)/2

Now if n is a whole number (i.e. it has no decimal part) then the number is triangular. We can use the following in code

var n = (Math.Sqrt(8 * x + 1) - 1) / 2;
return Math.Floor(n) == n;

to test if the result is a whole number.

A perfect square is simply a squared integer, i.e. 0^2, 1^2, 2^2 etc.; hence if we take the square root of a value we expect a whole (non decimal) number when it’s a perfect square. In our case 8x + 1 must be a perfect square (and the resulting n whole) for x to be triangular.
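Putting the test together as a small helper (IsTriangular is an illustrative name; the body is the same code shown above)

static bool IsTriangular(long x)
{
   var n = (Math.Sqrt(8 * x + 1) - 1) / 2;
   return Math.Floor(n) == n;
}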

References

http://mathforum.org/library/drmath/view/57162.html
http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/runsums/triNbProof.html