Basics of KQL

In the previous two posts we’ve looked at logging and using the TelemetryClient to send information to Application Insights. Application Insights offers a powerful query language (Kusto Query Language – KQL) for filtering logs.

We cannot possibly cover all options of KQL, so this post will just cover some of the basics and useful queries.

Tables

Application Insights supplies several tables for tracking events, information and various logging data.

The main tables are as follows

  • requests: Logs information regarding HTTP requests in your application.
  • dependencies: Tracks calls made to external services or databases.
  • exceptions: Logs exceptions within your application.
  • traces: Diagnostic log messages and traces from your application.
  • pageViews: Page views and user interactions within your web application.
  • customEvents: Custom events that are defined to track specific actions or user interactions.
  • metrics: Tracks performance metrics and custom metrics.
  • availabilityResults: Availability tests that check uptime and responsiveness of your application.
  • appExceptions: Like exceptions, specifically for application exceptions.
  • appMetrics: Like metrics, specifically for application metrics.
  • appPageViews: Like pageViews, specifically for application page views.
  • appPerformanceCounters: Performance counters for your application.
  • appSystemEvents: System level events within your application.
  • appTraces: Like traces, specifically for your application traces.
  • azureActivity: Azure activity logs.
  • browserTimings: Captures detailed timing information about the browser’s performance when loading web pages.

We can combine timespan units (used with functions such as ago, covered later in this post), so 1d6h30m means 1 day, 6 hours and 30 minutes.
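For example (a sketch assuming the standard traces table listed above), a combined timespan can be used with the ago function described later in this post:

```kql
traces
| where timestamp > ago(1d6h30m)
```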

Get everything

We can get everything across all tables (see the list of tables above) using

search *

Querying a table

We can get everything from a single table by running a query that is just the table name, so for example for the traces table

traces

Filtering

Obviously returning all traces, for example, probably returns more rows than we want, so we can filter using the where keyword (severityLevel 3 corresponds to Error)

traces
| where severityLevel == 3

Projections

We can select just the columns we’re interested in using the project keyword

traces
| where severityLevel == 3
| project timestamp, message

Aggregation

We can aggregate results using the summarize keyword, for example counting rows per one-hour bin

traces
| where severityLevel == 3
| summarize count() by bin(timestamp, 1h)

Ordering

We can sort results using the order by keyword

requests
| where success == false
| summarize count() by bin(timestamp, 1h)
| order by timestamp desc

Samples

Get all the tables that have data

search * 
| distinct $table

Get all records within a table for the last 10 minutes

traces
| where timestamp > ago(10m)

The ago function takes a timespan, whose units include

  • d: Days, for example 3d for three days
  • h: Hours, for example 2h for two hours
  • m: Minutes, for example 30m for thirty minutes
  • s: Seconds, for example 10s for ten seconds
  • ms: Milliseconds, for example 100ms for a hundred milliseconds
  • microsecond: Microseconds, for example 20microsecond for 20 microseconds
  • tick: 100-nanosecond units, for example 1tick for 100 nanoseconds

We can summarize each day’s request count into a timechart (a line chart). We also have options for a bar chart (barchart), pie chart (piechart), area chart (areachart) and scatter chart (scatterchart)

requests
| summarize request_count = count() by bin(timestamp, 1d)
| render timechart 

For some of the other chart types we need to supply different information, so let’s look at a pie chart of the different requests

requests
| summarize request_count = count() by name
| render piechart    

We can get requests between two dates, including use of the now() function

requests
| where timestamp between (datetime(2025-02-14T00:00:00Z) .. now())
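Combining the filtering, aggregation and rendering ideas from earlier, a sketch (assuming the standard requests table) that counts failed requests per day over a date range and charts them:

```kql
requests
| where timestamp between (datetime(2025-02-14T00:00:00Z) .. now())
| where success == false
| summarize failed_count = count() by bin(timestamp, 1d)
| render timechart
```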

References

Kusto Query Language (KQL) overview

Tracking events etc. with Application Insights

In my previous post I looked at what we need to do to set up and use Application Insights for our logs, but we also have access to the TelemetryClient in .NET (Microsoft also has clients for other languages) and this allows us to send information to some of the other Application Insights tables, for example for tracking events.

Tracking events is useful as a specific type of logging, i.e. we might want to track whether one of our application options is ever used, or to what extent it’s used. Imagine we have a button that runs some long running calculation – well, if nobody ever uses it, maybe it’s time to deprecate it and get rid of it.

Of course we could just use logging for this, but the TelemetryClient allows us to capture data within the customEvents and customMetrics tables within Application Insights (we’ll look at the available tables in the next post on the basics of KQL) and hence reduce the clutter of lots of logs.

Take a look at my post Logging and Application Insights with ASP.NET core to see code for a simple test application. We’re going to simply change the app.MapGet code to look like this (note I’ve left the logging in place as well so we can see all the options for telemetry and logging)

app.MapGet("/test", (ILogger<Program> logger, TelemetryClient telemetryClient) =>
{
    telemetryClient.TrackEvent("Test Event");
    telemetryClient.TrackTrace("Test Trace");
    telemetryClient.TrackException(new Exception("Test Exception"));
    telemetryClient.TrackMetric("Test Metric", 1);
    telemetryClient.TrackRequest("Test Request", DateTimeOffset.Now, TimeSpan.FromSeconds(1), "200", true);
    telemetryClient.TrackDependency("Test Dependency", "Test Command", DateTimeOffset.Now, TimeSpan.FromSeconds(1), true);
    telemetryClient.TrackAvailability("Test Availability", DateTimeOffset.Now, TimeSpan.FromSeconds(1), "Test Run", true);
    telemetryClient.TrackPageView("Test Page View");

    logger.LogCritical("Critical Log");
    logger.LogDebug("Debug Log");
    logger.LogError("Error Log");
    logger.LogInformation("Information Log");
    logger.LogTrace("Trace Log");
    logger.LogWarning("Warning Log");
})
.WithName("Test")
.WithOpenApi();

As you can see, we’re injecting the TelemetryClient object and Application Insights is set up (as per my previous post) using

builder.Services.AddApplicationInsightsTelemetry(options =>
{
    options.ConnectionString = configuration["ApplicationInsights:InstrumentationKey"];
});

From the TelemetryClient we have these various “Track” methods and, as you can no doubt surmise, these map to

  • TrackEvent: maps to the customEvents table
  • TrackTrace: maps to the traces table
  • TrackException: maps to the exceptions table
  • TrackMetric: maps to the customMetrics table
  • TrackRequest: maps to the requests table
  • TrackDependency: maps to the dependencies table
  • TrackAvailability: maps to the availabilityResults table
  • TrackPageView: maps to the pageViews table

Telemetry along with standard logging to Application Insights gives us a wealth of information that we can look at.

Of course, assuming we’re sending information to Application Insights, we’ll then want to look at features such as Application Insights | Monitoring | Logs where we can start to query against the available tables.

Logging and Application Insights with ASP.NET core

Obviously when we’re running an ASP.NET core application in Azure, we’re going to want the ability to capture logs in Azure. This usually means logging to Application Insights.

Adding Logging

Let’s start out by just looking at what we need to do to enable logging from ASP.NET core.

Logging is included by default in the shape of the ILogger interface (ILogger<T>), hence we can inject it into our code like this (this example uses minimal APIs)

app.MapGet("/test", (ILogger<Program> logger) =>
{
    logger.LogCritical("Critical Log");
    logger.LogDebug("Debug Log");
    logger.LogError("Error Log");
    logger.LogInformation("Information Log");
    logger.LogTrace("Trace Log");
    logger.LogWarning("Warning Log");
})
.WithName("Test")
.WithOpenApi();

To enable/filter logging we have something like the following within the appsettings.json file

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  }
}

The Default entry under LogLevel sets the minimum logging level for all categories. So, for example, a Default of Information means only logging of Information level and above (i.e. Information, Warning, Error and Critical) is captured.

The Microsoft.AspNetCore entry is a category-specific setting, in that it logs entries from the Microsoft.AspNetCore namespace using the supplied log level. Because we can configure by namespace we can also use categories such as Microsoft, System and Microsoft.Hosting.Lifetime. We can do the same with our own code, i.e. MyApp.Controllers, so this allows us to really start to tailor different sections of our application and what gets captured in the logs.

Logging Levels

The various logging levels are as follows

  • LogLevel.Trace: The most detailed level, used for debugging and tracing (useful for entering/exiting methods and logging variables).
  • LogLevel.Debug: Detailed but less so than Trace (useful for debugging and workflow logging).
  • LogLevel.Information: Information messages at a higher level than the previous two levels (useful for logging steps of processing code).
  • LogLevel.Warning: Indicates potential problems that do not warrant error level logging.
  • LogLevel.Error: Use for logging errors and exceptions and other failures.
  • LogLevel.Critical: Critical issues that may cause an application to fail, such as those that might crash your application. Could also include things like missing connection strings etc.
  • LogLevel.None: Essentially disables logging.

Application Insights

Once you’ve created an Azure resource group and/or Application Insights service, you’ll be able to copy the connection string to connect to Application Insights from your application.

Before we can use Application Insights in our application we’ll need to

  • Add the NuGet package Microsoft.ApplicationInsights.AspNetCore to our project
  • Add the ApplicationInsights section to the appsettings.json file, something like this
    "ApplicationInsights": {
      "InstrumentationKey": "InstrumentationKey=xxxxxx",
      "LogLevel": {
        "Default": "Information",
        "Microsoft": "Warning"
      }
    },
    

    We can obviously set the InstrumentationKey in code if preferred; the LogLevel here is specific to what is captured within Application Insights

  • Add the following to the Program.cs file below CreateBuilder

    var configuration = builder.Configuration;
    
    builder.Services.AddApplicationInsightsTelemetry(options =>
    {
        options.ConnectionString = configuration["ApplicationInsights:InstrumentationKey"];
    });
    

Logging Providers in code

We can also add logging via code, so for example after the CreateBuilder line in Program.cs we might have

builder.Logging.ClearProviders();
builder.Logging.AddConsole();
builder.Logging.AddDebug(); 

In the above we start by clearing all currently registered logging providers, then we add providers for logging to the console and to debug output. The appsettings.json log levels are still relevant to which logs we wish to capture.

Using secrets in your appsettings.json via Visual Studio 2022 and dotnet CLI

You’ve got yourself an appsettings.json file for your ASP.NET core application and you’re using sensitive data, such as passwords or other secrets. Now you obviously don’t want to commit those secrets to source control, so you’re not going to want to store these values in your appsettings.json file.

There are several ways to achieve this; one of those is to use the Visual Studio 2022 “Manage User Secrets” option which is on the context menu off of your project file. There’s also the ability to use the dotnet CLI for this, as we’ll see later.

This context menu option will create a secrets.json file in %APPDATA%\Microsoft\UserSecrets\{Guid}. The GUID is stored within your .csproj in a PropertyGroup like this

<UserSecretsId>0e6abf63-deda-47fc-9a80-1cb56abaeead</UserSecretsId>

So the secrets file can be used like this

{
  "ConnectionStrings:DefaultConnection": "my-secret"
}

and this will map to your appsettings.json, which might look like this

{
  "ConnectionStrings": {
    "DefaultConnection": "not set"
  }
}

Now we can access the configuration in the usual way, for example

builder.Configuration.AddUserSecrets<Program>();

var app = builder.Build();
var connectionString = app.Configuration.GetSection("ConnectionStrings:DefaultConnection");
var defaultConnection = connectionString.Value;

When somebody else clones your repository they’ll need to recreate the secrets file; we could use dotnet user-secrets, for example

dotnet user-secrets set "ConnectionStrings:DefaultConnection" "YourConnectionString"

and you can list the secrets using

dotnet user-secrets list

Disable the Kestrel server header

We generally don’t want to expose information about the server we’re running our ASP.NET core application on.

In the case of Kestrel we can disable the server header using

var builder = WebApplication.CreateBuilder(args); 

builder.WebHost.UseKestrel(options => 
   options.AddServerHeader = false);

Protocols and Behaviours in Elixir

Protocols and Behaviours in Elixir are similar to interfaces in languages such as C#, Java etc.

Protocols can be thought of as interfaces for data whereas behaviours are like interfaces for modules, let’s see what this really means…

Protocols

A protocol is available for a data type, so let’s assume we want a toString function on several data types, but we obviously cannot cover all possible types that may be created in the future, i.e. a Person struct or the like. We can define a protocol which can be applied to data types, like this…

Let’s start by defining the protocol

defprotocol Utils do
  @spec toString(t) :: String.t()
  def toString(value)
end

Basically we’re declaring the specification for the protocol using the @spec annotation. This defines the inputs and outputs: the function takes any params, and the type after the :: is the return type. Next we define the function.

At this point we have no implementations, so let’s create a couple for the standard types String and Integer

defimpl Utils, for: String  do
  def toString(value), do: "String: #{value}"
end

defimpl Utils, for: Integer  do
  def toString(value), do: "Integer: #{value}"
end

The for is followed by the data type supported by this implementation. So as you can see, we have a couple of simple implementations, but where protocols become more important is that we can now define the toString function on other types. Let’s assume we have the Person struct from a previous post

defmodule Person do
  @enforce_keys [:firstName, :lastName]
  defstruct [:age, :firstName, :lastName]

  def create() do
    %Person{ firstName: "Scooby", lastName: "Doo", age: 30 }
  end
end

and we want to give it a toString function, we would simply define a new implementation of the protocol for the Person data type, like this

defimpl Utils, for: Person  do
  def toString(value), do: "Person: #{value.firstName} #{value.lastName}"
end

Now from iex or your code you can do something like this

scooby = Person.create()
Utils.toString(scooby)

and you’ve got toString working with the Person type.
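Putting all of the pieces together, here’s a self-contained sketch (runnable as a .exs script) that gathers the protocol, the implementations and the calls from above:

```elixir
defprotocol Utils do
  @spec toString(t) :: String.t()
  def toString(value)
end

defimpl Utils, for: String do
  def toString(value), do: "String: #{value}"
end

defimpl Utils, for: Integer do
  def toString(value), do: "Integer: #{value}"
end

defmodule Person do
  @enforce_keys [:firstName, :lastName]
  defstruct [:age, :firstName, :lastName]

  def create() do
    %Person{firstName: "Scooby", lastName: "Doo", age: 30}
  end
end

defimpl Utils, for: Person do
  def toString(value), do: "Person: #{value.firstName} #{value.lastName}"
end

# Each call dispatches to the implementation for the value's type
IO.puts(Utils.toString("hello"))         # String: hello
IO.puts(Utils.toString(42))              # Integer: 42
IO.puts(Utils.toString(Person.create())) # Person: Scooby Doo
```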

Behaviours

Behaviours are again similar to interfaces but are used to define what a module is expected to implement. Let’s stick with the idea of a toString function which just outputs some information about the module that’s implementing it, but this time we’re expecting a module to implement this function, so we declare the behaviour as follows

defmodule UtilBehaviour do
  @callback toString() :: String.t()
end

We use the @callback annotation to declare the expected function(s) and @macrocallback for macros. As per the protocol we give the signature of the function followed by :: and the expected return type.

Now to implement this, let’s again go to our Person struct (remember this version of toString is just going to output some predefined string that represents the module)

defmodule Person do
  @behaviour UtilBehaviour

  @enforce_keys [:firstName, :lastName]
  defstruct [:age, :firstName, :lastName]

  def create() do
    %Person{ firstName: "Scooby", lastName: "Doo", age: 30 }
  end

  def toString() do
    "This is a Person module/struct"
  end
end

Now our module implements the behaviour and using Person.toString() outputs “This is a Person module/struct”.

We can also use the @impl annotation to explicitly mark which behaviour is being implemented, like this

@impl UtilBehaviour
def toString() do
  "This is a Person module/struct"
end

This @impl annotation tells the compiler explicitly what you’re implementing; it’s an aid to development by making it clear what’s implementing what. If you use @impl once in a module you have to use it on every callback implementation in that module.

CSS units of measurement

CSS offers several types of units of measurement as part of your web design, i.e. for font sizes, spacing etc.

px (pixels)

If you’ve come from any other UI development you’ll probably be used to using pixels to define window sizes etc. Pixels allow us to specify positioning and sizes exactly, however they are not scalable, i.e. a 10px * 10px button might look fine on a lower resolution monitor but look tiny on a high resolution one.

1px = 1/96th of an inch

cm

Centimetres, an absolute measurement; 1cm equals 37.8px, which equals 25.2/64in

mm

Millimetres, 1mm = 1/10th of 1cm

Q

Quarter-millimetres, 1Q = 1/40th of a cm

in

Inches, 1 in = 2.54cm = 96px

pc

Picas, 1pc = 1/6th of an inch

pt

Points, 1pt = 1/72nd of an inch

em

This is a unit which is relative to the font size of the parent element. So for example 3em will be three times the parent element’s font size. In other words, let’s assume the parent’s font size is 32px, then 3em would be 3 * 32px, i.e. 96px.

rem (root em)

This is similar (as the name suggests) to em, but it’s relative to the root HTML font size. Otherwise we can calculate things as per em, but using the root font size not the parent element.
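A small sketch of the difference between em and rem; the pixel values and class names here are just example assumptions:

```css
html { font-size: 16px; }               /* root font size */
.parent { font-size: 32px; }
.parent .child-em { font-size: 2em; }   /* 2 × 32px = 64px (relative to parent) */
.parent .child-rem { font-size: 2rem; } /* 2 × 16px = 32px (relative to root) */
```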

Collections in Elixir

Disclaimer: I’m going through some old posts that were in draft and publishing ones which look relatively complete in case they’re of use. This post may not be 100% complete but does give a good overview of Elixir collections.

Lists in Elixir are implemented as linked lists which can hold different types.

[3, "Three", :three]

Prepending to a list is faster than appending

list = [3, "Three", :three]
["pre" | list]

Appending

list = [3, "Three", :three]
list ++ ["post"]

List concat

[3, "Three", :three] ++ ["four", :four, 4]

List subtraction uses strict comparison, so the following returns [2] (because 2 and 2.0 are not strictly equal)

[2] -- [2.0]

Head and tail

hd [3, "Three", :three]
tl [3, "Three", :three]

Pattern matching

We can split the head and tail using pattern matching

[head | tail] = [3.14, :pie, "Apple"]

The equivalent of a dictionary is known as a keyword list in Elixir; the following two forms are equivalent

[foo: "bar", hello: "world"]
[{:foo, "bar"}, {:hello, "world"}]

Keys must be atoms, keys are ordered and do not have to be unique
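As a quick sketch (the values here are just examples), keyword list values can be read with the access syntax, which returns the first match when a key is duplicated:

```elixir
list = [foo: "bar", hello: "world", foo: "again"]

IO.inspect(list[:foo])                     # "bar" (first match wins)
IO.inspect(Keyword.get_values(list, :foo)) # ["bar", "again"]
```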

Maps

Unlike keyword lists, maps allow keys of any type and are unordered. The syntax for a map is %{}

map = %{:foo => "bar", "hello" => :world}
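Values can then be read with the access syntax, and atom keys additionally support dot notation (a small sketch using the map above):

```elixir
map = %{:foo => "bar", "hello" => :world}

IO.inspect(map[:foo])    # "bar"
IO.inspect(map["hello"]) # :world
IO.inspect(map.foo)      # "bar" (dot syntax works for atom keys only)
```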

SQL Server and IDENTITY_INSERT

I’m creating an SQL script to seed my SQL Server database; the tables include a primary key with autoincrement set (i.e. IDENTITY(1,1)). Inserting data without supplying a primary key works fine, but if we decide to seed the primary key data as well then we need to make a couple of changes to the SQL script.

Why would I want to seed the primary key if it’s autoincrementing, you may ask. The reason is that I intend to also seed some of the relationship data, and this of course means I already have the table’s primary key value (because I set it) and thus can easily create the relationship links.

What we need to do is wrap the INSERTs for a specific table in SET IDENTITY_INSERT ON/OFF statements. For example, below we have a Country table whose primary key is Id.

SET IDENTITY_INSERT [dbo].[Country] ON

INSERT INTO [dbo].[Country] ([Id], [Name]) 
VALUES (1, 'Australia')

SET IDENTITY_INSERT [dbo].[Country] OFF
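The point of seeding the keys, as mentioned, is the relationship data. A sketch of what that might look like (the City table and its CountryId foreign key are hypothetical) shows how the known Country Id makes the link trivial:

```sql
SET IDENTITY_INSERT [dbo].[City] ON

-- CountryId = 1 refers to the Country row seeded above
INSERT INTO [dbo].[City] ([Id], [Name], [CountryId])
VALUES (1, 'Sydney', 1)

SET IDENTITY_INSERT [dbo].[City] OFF
```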

Adding Playwright to a React web app

Playwright is an automation testing framework for the web. Let’s add it to our React app and demonstrate how to use it.

Installation

  • Install Playwright
    yarn create playwright
    
  • You’ll be asked where to put your end-to-end tests, default is e2e so let’s stick with that
  • Next, you’ll be asked whether to add a GitHub actions workflow; the default is N, but I want one, so I selected Y
  • Now you’re asked whether to install Playwright browsers, default is Y so let’s stick with that
  • Now Playwright is downloaded and installed

Writing Tests

Within the folder we set for our tests (I used the default e2e) we can start adding our *.spec.ts test files. For example, here’s a simple test just to check the title of my web app.

import { test, expect } from '@playwright/test';

test.beforeEach(async ({ page }) => {
  // obviously needs changing to your deployed web app, but
  // fine for local testing
  await page.goto('http://localhost:3000/');
});

test('Ensure title is as expected', async ({ page }) => {

  await expect(page).toHaveTitle(/My Web App/);
  await page.getByText('End Sat Dec 31 2022').click();
});

In the above we simply create a test and use Playwright to automate testing of the web app.

Now to run this, as I’m using React, add the following to the scripts section of package.json

"playwright_test": "playwright test",
"playwright_report": "playwright show-report",

Now we can run yarn playwright_test to run the tests within e2e or whatever your test folder was named.