Author Archives: purpleblob

Deploying your React application to IIS

We’re going to simply go through the steps for creating a React application (it’ll be the default sample application) through to deployment to IIS.

Create the React application using yarn

First off, let’s create a folder in your development folder. Next, open your preferred command prompt application (this of course assumes you have yarn installed). Execute the following from within your newly created folder

yarn create react-app my-app --typescript

This will create a bare-bones sample application and grab all the dependencies required for it to run.

Does it run?

If you want to verify everything is working, just change directory to the application we created (we named it my-app in the previous yarn command) and execute the following command

yarn start

A node server will run and your preferred browser will display the React application (assuming everything worked).

Building our application

At this point our application will happily run via yarn, but for deployment, we need to build it, using

yarn build

This will create a build folder within our my-app folder.

Creating an IIS website for our React application

We’re now going to create a website within IIS which points to the build folder we just created (obviously, if you prefer, you can deploy the code to the inetpub folder).

From the Internet Information Services (IIS) Manager control panel (this information is specific to IIS 7.0; things may differ in newer versions but the concepts will be the same)

  • Select the Sites connection and right mouse click on it
  • Select Add Web Site
  • Supply a site name; mine’s sample. As can be seen, the name need not be related to our application name
  • Set the Physical path to the build folder created in the previous build step, so for example this will be {path}/my-app/build
  • Ports 80 and 8080 are usually already set up and in use by the default web site, so change the port to 5000 and press OK to complete the addition of the website.

At this point, if you try to view the new website using http://localhost:5000, you’ll probably get an error stating access is denied. As this example has our source outside of the inetpub folder, we will need to change the IIS permissions.

From the Internet Information Services (IIS) Manager control panel

  • Right mouse click on the sample website
  • Select Edit Permissions
  • Select the Security tab
  • Click the Edit button
  • Now click the Add… button
  • If you’re on a domain controller you may need to change Locations… to your machine name. Within Enter the object names to select, type IIS_IUSRS and press Check Names; if all went well the text will be underlined and no errors displayed. Now press OK
  • Keep pressing OK on subsequent dialogs until you’re back to the IIS control panel

If you try refreshing the webpage, it’ll probably still display Access Denied. We need to configure the application pool and allow anonymous access to the site in this case.

From the Internet Information Services (IIS) Manager control panel

  • Select Application Pools
  • Double click on the sample application pool
  • Change .NET Framework version to No Managed Code

From the Internet Information Services (IIS) Manager control panel

  • Select the sample web site and from the right hand pane of IIS double click on Authentication
  • Right mouse click on Anonymous Authentication, select Edit…, choose Application pool identity, then press OK

Refreshing the browser should now display the React logo, and the sample application should be running.

React and serve

During development of our React application, we’ll be using something like

yarn start

When we’re ready to deploy our application, we’ll use

yarn build

Which, in the case of React with TypeScript, will transpile to JavaScript and package the files ready for deployment.

We can also serve the files within the build folder using the serve application.

Installing serve

To install serve, execute

yarn global add serve

This will add serve to the global location. Normally (without the global command), packages are installed local to the folder you’re working in. Global packages, on the other hand, are installed in C:\Users\{username}\AppData\Roaming\npm\bin on Windows.

To check the location on your installation run

yarn global bin

Running serve on our build

Once serve is installed we can run it using

serve -s build

Note: Obviously if the global location is not in your path you’ll need to prefix the command with the previously found location
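A minimal sketch of that prefixing, assuming the default Windows global bin location mentioned earlier (check yours with yarn global bin):

```shell
# Assumed global bin location on Windows; verify with: yarn global bin
GLOBAL_BIN="$HOME/AppData/Roaming/npm/bin"

# Prepend it to PATH so the serve command resolves without a full path
export PATH="$GLOBAL_BIN:$PATH"

# With PATH set, serve the production build (run from the my-app folder):
# serve -s build
```

Adding the export line to your shell profile makes the change permanent rather than per-session.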

Reactive Extensions (Rx) in JavaScript (rxjs)

Reactive Extensions are everywhere – I wanted to try the JavaScript version of the library, so below is a sample React component demonstrating a “fake” service call that might occur within the fetch function. The setTimeout is used simply to simulate some latency in the call.

To add rxjs simply use

yarn add rxjs

Now here’s the sample component code…

import { Component } from 'react';
import { Observable, Observer, Subscription } from 'rxjs';

interface ServiceState {
  data: string;
}

class ServiceComponent extends Component<{}, ServiceState> {

  _subscription!: Subscription;

  constructor(props: {}) {
    super(props);
    this.state = {
      data : "Busy"
    };
  }

  fetch() : Observable<string> {
    return Observable.create((o: Observer<string>) => {
      setTimeout(() => {
        o.next('Some data');
        o.complete();
      }, 3000);
    });
  }

  componentDidMount() {
    this._subscription = this.fetch().subscribe(s => {
      this.setState({data: s})
    });
  }

  componentWillUnmount() {
    if(this._subscription) {
      this._subscription.unsubscribe();
    }
  }  

  render() {
    return (
      <div>{this.state.data}</div>
    );
  }
}

In this example, the fetch function returns an Observable of type string. We subscribe to it within the componentDidMount function, which is called when the component is inserted into the DOM, and any values emitted are applied to the component’s state.

The componentWillUnmount function is called when the component is removed from the DOM, hence that’s where we unsubscribe from the Observable.

Adding OneDrive support to an application

I wanted to create an application which stores data for synchronisation between devices, so figured the cheapest way might be to use one of the free storage mediums, in this case OneDrive.

  • Create an application, mine’s a WPF app.
  • Add the NuGet package Microsoft.OneDriveSDK
  • Add the NuGet package Microsoft.OneDriveSdk.Authentication

Note: Windows allows data stored in the user’s roaming profile to be synchronised to the “cloud”. So if your application is solely Windows based, this might be a better solution than using OneDrive directly.

Registering our application

Before we get into any coding we’ll need to register our application with the Microsoft Application Registration Portal, instructions can be found here.

Basically you’ll be prompted to log into the Microsoft Application Registration Portal, then you should Add your application, supplying it with a name. Once completed, this will supply us with an application id and application secrets.

As my sample application is a native application, I don’t need to create an app password.

Just click Add Platform and add the platforms you are wanting to support (again mine’s a simple native application).

Writing some code

First off we’ll need to authenticate the user, so we use an authentication adapter. Here’s my code

var msaAuthenticationProvider = new MsaAuthenticationProvider(
   AppId,
   "https://login.live.com/oauth20_desktop.srf",
   new[] { "onedrive.readwrite", "offline_access" });

When you run this code, you’ll be presented with a login screen requiring username/email and password, then a dialog telling you your application is requesting permissions to access OneDrive and asking you to grant or deny them.

Authorisation scopes, such as onedrive.readonly can be found here.

Now we need to authenticate and access the OneDrive client.

await msaAuthenticationProvider.AuthenticateUserAsync();

var oneDriveClient = new OneDriveClient(
   "https://api.onedrive.com/v1.0", 
   msaAuthenticationProvider);

Once we have an instance of the OneDriveClient we can interact with the OneDrive files/file system. Here’s an example snippet of code which gets a known folder location; for example, imagine we have a folder named Public/MyApp, then we could use

var item = await 
   oneDriveClient
      .Drive
      .Root
      .ItemWithPath("Public/MyApp")
      .Request()
      .GetAsync();

We can also use a “special folder”, in which the application’s data is stored, by changing the permission scope to onedrive.appfolder, for example

var msaAuthenticationProvider = new MsaAuthenticationProvider(
   AppId,
   "https://login.live.com/oauth20_desktop.srf",
   new[] { "onedrive.appfolder", "offline_access" });


await msaAuthenticationProvider.AuthenticateUserAsync();

var item = await oneDriveClient
   .Drive
   .Special
   .AppRoot
   .Request()
   .GetAsync();

Further Reading

OneDrive Dev Center
Official projects and SDKs for Microsoft OneDrive
Using an App Folder to store user content without access to all files

Using Microsoft.Extensions.Logging

The Microsoft.Extensions.Logging namespace includes interfaces and implementations for a common logging interface. It’s a little like common-logging and, whilst it’s not restricted to ASP.NET Core, it has entry points to work with ASP.NET Core.

What’s common logging all about?

In the past we’ve been hit with the problem of multiple logging frameworks/libraries which have slightly different interfaces. On top of this we might have other libraries which require those specific interfaces.

So for example, popular .NET logging frameworks such as log4net, NLog, Serilog along with the likes of the Microsoft Enterprise Block Logging might be getting used/expected in different parts of our application and libraries. Each ultimately offers similar functionality but we really don’t want multiple logging frameworks if we can help it.

common-logging was introduced a fair few years back to allow us to write code against a standardised interface whilst using whatever logging framework we want behind the scenes. Microsoft’s Microsoft.Extensions.Logging offers a similar abstraction.

Out of the box, Microsoft.Extensions.Logging comes with some standard logging providers: Microsoft.Extensions.Logging.Console, Microsoft.Extensions.Logging.Debug, Microsoft.Extensions.Logging.EventLog, Microsoft.Extensions.Logging.TraceSource and Microsoft.Extensions.Logging.AzureAppServices. As the names suggest, these give us logging to the console, to debug, to the event log, to a trace source and to Azure’s diagnostic logging.

Microsoft.Extensions.Logging also offers a relatively simple mechanism for adding further logging “providers”, and third-party logging frameworks such as NLog, log4net and Serilog plug in this way.

How to use Microsoft.Extensions.Logging

Let’s start by simply seeing how we can create a logger using this framework.

Add the Microsoft.Extensions.Logging and Microsoft.Extensions.Logging.Debug NuGet packages to your application.

The first gives us the interfaces and code for the LoggerFactory etc. whilst the second gives us the debug extensions for the logging factory.

Note: The following code has been deprecated and replaced with ILoggingBuilder extensions.

var factory = new LoggerFactory()
   .AddDebug(LogLevel.Debug);

Once the logger factory has been created we can create an ILogger using

ILogger logger = factory.CreateLogger<MyService>();

The MyService may be a type that you want to create logs for; alternatively you can pass CreateLogger a category name.

Finally, using a standard interface we can log something using the following code

logger.Log(LogLevel.Debug, "Some message to log");

Using other logging frameworks

I’ll just look at a couple of other frameworks, NLog and Serilog.

For NLog add the NuGet package NLog.Extensions.Logging, for Serilog add Serilog.Extensions.Logging and in my case I’ve added Serilog.Sinks.RollingFile to create logs to a rolling file and Serilog.Sinks.Debug for debug output.

Using NLog

Create a file named nlog.config and set its properties within Visual Studio to Copy always. Here’s a sample

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" 
   xsi:schemaLocation="NLog NLog.xsd"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   autoReload="true"
   internalLogFile="c:\temp\mylog.log"
   internalLogLevel="Info" >


<targets>
   <target xsi:type="File" name="fileTarget" 
      fileName="c:\temp\mylog.log"      
      layout="${date}|${level:uppercase=true}|${message}
         ${exception}|${logger}|${all-event-properties}" />
   <target xsi:type="Console" name="consoleTarget"
      layout="${date}|${level:uppercase=true}|${message} 
         ${exception}|${logger}|${all-event-properties}" />
</targets>

<rules>
   <logger name="*" minlevel="Trace" writeTo="fileTarget,consoleTarget" />
</rules>
</nlog>

Now in code we can load this configuration file using

NLog.LogManager.LoadConfiguration("nlog.config");

and now the only difference from the previous example using the LoggerFactory is to change the creation of the factory to

var factory = new LoggerFactory()
   .AddNLog();

Everything else remains the same. Now you should be seeing a file named mylog.log in c:\temp along with debug output.

Using Serilog

In Serilog’s case we’ll create the logger configuration in code; hence here’s the code to create both a file and a debug log

Note: See Serilog documentation for creating the configuration via a config or settings file.

Log.Logger = new LoggerConfiguration()
   .MinimumLevel.Debug()
   .WriteTo.RollingFile("c:\\temp\\log-{Date}.txt")
   .WriteTo.Debug()
   .CreateLogger();

This will create a log file in c:\temp named log-{Date}.txt, where {Date} is replaced with today’s date. Obviously the inclusion of WriteTo.Debug also gives us debug output.

We simply create the logger using familiar looking code

var factory = new LoggerFactory()
   .AddSerilog();

Everything else works the same but now we’ll see Serilog output.

Creating our own LoggerProvider

As you’ve seen in all the examples, extension methods are used: AddDebug, AddNLog, AddSerilog. Each of these adds an ILoggerProvider to the factory. We can easily add our own provider by implementing the ILoggerProvider interface; here’s a simple example of a DebugLoggerProvider

public class DebugLoggerProvider : ILoggerProvider
{
   private readonly ConcurrentDictionary<string, ILogger> _loggers;

   public DebugLoggerProvider()
   {
      _loggers = new ConcurrentDictionary<string, ILogger>();
   }

   public void Dispose()
   {
   }

   public ILogger CreateLogger(string categoryName)
   {
      return _loggers.GetOrAdd(categoryName, _ => new DebugLogger());
   }
}

The provider needs to keep track of any ILoggers created, keyed on the category name. Next we’ll create a DebugLogger which implements the ILogger interface

public class DebugLogger : ILogger
{
   public void Log<TState>(
      LogLevel logLevel, EventId eventId, 
      TState state, Exception exception, 
      Func<TState, Exception, string> formatter)
   {
      if (formatter != null)
      {
         Debug.WriteLine(formatter(state, exception));
      }
   }

   public bool IsEnabled(LogLevel logLevel)
   {
      return true;
   }

   public IDisposable BeginScope<TState>(TState state)
   {
      return null;
   }
}

In this sample logger we handle all LogLevels and do not support BeginScope. All the work is done in the Log method, and even that is pretty simple: we use the supplied formatter to create our message, then output it to our log sink, in this case Debug. If no formatter is passed we currently output nothing, but obviously we could create our own formatter to be used instead.

Finally, sticking with the extension method pattern to add a provider, we’ll create the following

public static class DebugLoggerFactoryExtensions
{
   public static ILoggerFactory AddDebugLogger(
      this ILoggerFactory factory)
   {
      factory.AddProvider(new DebugLoggerProvider());
      return factory;
   }
}

That’s all there is to this very simple example, we create the factory in the standard way, i.e.

var factory = new LoggerFactory()
   .AddDebugLogger();

and everything else works without any changes.

Running tibrvlisten

I keep forgetting this command, so time to blog about it

Occasionally, we’ll want to monitor RV messages being received on our host machine. To do this we can use the tibrvlisten.exe application from the command prompt along with the appropriate arguments; tibrvlisten.exe will filter and display the RV messages as they arrive on the host computer.

An example of use might be

tibrvlisten -service 1234 -network ;123.123.123.123 "mytopics.>"

The -service argument is the port being used for RV, followed by the IP address of the network it’s on; note the ; preceding the IP address. Also, in this case I’m filtering what tibrvlisten outputs to the screen by displaying only those messages whose subject starts with mytopics., using the wildcard > symbol.

A few dir/ls/Get-ChildItem Powershell commands

Get-ChildItem is aliased as ls/dir (as well as gci), so is used to list files and directories.

Here’s a few useful commands

Find only folders/directories

ls -Directory

Find only files

ls -File

Find files with a given extension

ls -Recurse | where {$_.Name -like "*.bak"}

Only search to a certain depth

The above will recurse over all folders, but we might want to restrict this to a certain depth, i.e. in this example up to 2 directories deep

ls -Recurse -Depth 2 | where {$_.Name -like "*.bak"}

Finding the directories with files with a given extension

ls -Recurse -Depth 2 | where {$_.Name -like "*.bak"} | select directory -Unique

The use of -Unique ensures we do not list the same directory multiple times (i.e. once for each .bak file found within a single directory).

Speech synthesis in Windows with C#

As part of a little side project, I wanted to have Windows speak some text in response to a call to a service. Initially I started to look at Cortana, but this seemed overkill for my initial needs (i.e. you need to write a bot, deploy it to Azure etc.), whereas System.Speech.Synthesis offers a simple API for text to speech.

Getting started

It can be as simple as this: add the System.Speech assembly to your references, then we can use the following

using (var speech = new SpeechSynthesizer())
{
   speech.Volume = 100; 
   speech.Rate = -2; 
   speech.Speak("Hello World");
}

Volume takes a value in the range [0, 100], and Rate, the speaking rate, can range between [-10, 10].

Taking things a little further

We can also look at controlling other aspects of speech, such as emphasis, when a word should be spelled out, and ways to create “styles” for different sections of speech. To use these we can use the PromptBuilder. Let’s start by creating a “style” for a section of speech

var promptBuilder = new PromptBuilder();

var promptStyle = new PromptStyle
{
   Volume = PromptVolume.Soft,
   Rate = PromptRate.Slow
};

promptBuilder.StartStyle(promptStyle);
promptBuilder.AppendText("Hello World");
promptBuilder.EndStyle();

using (var speech = new SpeechSynthesizer())
{
   speech.Speak(promptBuilder);
}

We can build up our speech from different styles and include emphasis using

promptBuilder.AppendText("Hello ", PromptEmphasis.Strong);

we can also spell out words using

promptBuilder.AppendTextWithHint("Hello", SayAs.SpellOut);

Speech Recognition

The System.Speech assembly also includes the ability to recognize speech. This requires the Speech Recognition software within Windows to be running (see Control Panel | Ease of Access | Speech Recognition).

To enable your application to use speech recognition you need to execute the following

SpeechRecognizer recognizer = new SpeechRecognizer();

Just executing this will allow the speech recognition code (on the focused application) to do things like execute button code as if the button were pressed. You can also hook into events to use recognised speech within your application, for example using

SpeechRecognizer recognizer = new SpeechRecognizer();
recognizer.SpeechRecognized += (sender, args) => 
{ 
   input.Text = args.Result.Text; 
};

gitconfig, the where and how’s

The git configuration files are stored at three different levels.

Local configuration is stored within the cloned repository’s .git folder, in a file named config.

Global configuration is stored in a file with no name and the extension .gitconfig, located in your home directory. On Windows this can be confusing, especially if the home directory is in a roaming profile. Normally we’d find it in c:\users\your-user-name; however, if you have a roaming profile then you’ll need to check

HOME="$HOMEDRIVE$HOMEPATH"

So for example this might end up as H:\

System configuration is stored as gitconfig (filename but no extension). In the case of a Windows OS, this will be in C:\Program Files\Git\mingw64\etc; further configuration data may be found in the config file (filename but no extension) within C:\ProgramData\Git.

Scope

The scope of these files is as follows: local overrides global options, and global overrides system.
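A quick sketch of this precedence, using a throwaway repository and a sandboxed HOME so no real settings are touched (the user.name values are hypothetical):

```shell
# Throwaway area so neither the repo nor the global config touches real settings
tmp=$(mktemp -d)
export HOME="$tmp"          # git's global config now lives under $tmp

cd "$tmp"
git init -q demo
cd demo

# Set the same key at global and local scope
git config --global user.name "Global Name"
git config --local user.name "Local Name"

# The effective value inside the repository is the local one
git config user.name        # prints: Local Name
```

Removing the local entry (git config --local --unset user.name) would make the global value visible again.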

List the configurations

If you execute the command below, you’ll see a list of all the configuration along with the location of the files used.

git config --list --show-origin

Note: beware if passwords are stored in the configuration then these will be displayed when you run this command.

The command doesn’t so much show you what’s “actually” being used for configuration by the current repo as “here are all the configuration values”, so you’ll need to consider the scope of each file to determine which values git is currently using.

We can also use the same command along with the switches --local, --system and --global to list the files used along with the configuration within them.

Further reading

git-config
git config (Atlassian)

A .NET service registered with Eureka

To create a .NET service which is registered with Eureka we’ll use the Steeltoe libraries…

Create an ASP.NET Core Web Application and select WebApi. Add the Steeltoe.Discovery.Client NuGet package, then in Startup.cs, within ConfigureServices, add the following

services.AddDiscoveryClient(Configuration);

and within the Configure method add

app.UseDiscoveryClient();

We’re not going to create any other code to implement a service, but we will need some configuration added to the appsettings.json file, so add the following

  "spring": {
    "application": {
      "name": "eureka-test-service"
    }
  },
  "eureka": {
    "client": {
      "serviceUrl": "http://localhost:8761/eureka/",
      "shouldRegisterWithEureka": true,
      "shouldFetchRegistry": false
    },
    "instance": {
      "hostname": "localhost",
      "port": 5000
    }
  }

The application name is what will be displayed within Eureka and is visible via http://localhost:8761/eureka/apps (obviously replacing the host name and port with your Eureka server’s).