Scheduled Azure DevOps pipelines

I wanted to run some tasks once a day; the idea being that we run an application to check for any drift/changes to configuration and so on. Luckily this is simple in Azure DevOps.

We create a YAML pipeline with no trigger and instead define a cron-style schedule, as below

trigger: none

schedules:
- cron: "0 7 * * *"
  displayName: Daily
  branches:
    include:
    - main
  always: true

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UseDotNet@2
  inputs:
    packageType: 'sdk'
    version: '10.x'

- script: dotnet build ./tools/TestDrift/TestDrift.csproj -c Release
  displayName: Test for drift

- script: |
    dotnet ./tools/TestDrift/bin/Release/net10.0/TestDrift.dll
  displayName: Run Test for drift

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: ./tools/TestDrift/bin/Release/net10.0/drift-results.xml
    failTaskOnFailedTests: true

In this example we’re publishing test results. Azure DevOps supports several formats; see the testResultsFormat input. We’re just creating an XML file named drift-results.xml with the following format


<testsuite tests="0" failures="0">
  <testcase name="check site" />
  <testcase name="check pipeline">
    <failure message="pipeline check failed" />
  </testcase>
</testsuite>

In C# we’d do something like

using System.Reflection;
using System.Xml.Linq;

var suite = new XElement("testsuite");
var total = GetTotalTests();
var failures = 0;

var testCase = new XElement("testcase",
   new XAttribute("name", "check pipeline")
);

// run some test
var success = RunSomeTest();

if (!success)
{
  failures++;
  testCase.Add(new XElement("failure",
    new XAttribute("message", "pipeline check failed")
  ));
}

suite.Add(testCase);

// completed, record the totals on the suite element
suite.SetAttributeValue("tests", total);
suite.SetAttributeValue("failures", failures);

// write the results next to the executable, matching the path the pipeline expects
var exeDir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)!;
var outputPath = Path.Combine(exeDir, "drift-results.xml");

File.WriteAllText(outputPath, suite.ToString());

Using one of the valid formats, such as JUnit, also results in the Azure Pipelines build showing a Tests tab with our test results listed.

Sending email via the Azure Communication Service

I want to send emails from an Azure function when it’s called from my React UI. Azure provides the Communication Service and Email Communication Service for this functionality.

In the Azure Portal

  • Create a Communication Service resource in your resource group
  • Create an Email Communication Service resource in your resource group

Add a free Azure subdomain

If you now go to the Email Communication Service | Overview, you can “Add a free Azure subdomain”. This is a really quick and simple way to get a domain up and running, but it comes with lower quotas than a custom domain. That said, let’s click the “1-click add” and create an Azure subdomain.

When completed you’ll see Settings | Provision domains, where it should show a domain name, a domain type of Azure subdomain, and all statuses as Verified.

Add a custom domain

Before trying anything out, let’s cover the custom domain. We’ll assume you have a domain with a non-Azure DNS provider, for example GoDaddy. In the Azure Email Communication Service | Overview, click the “Setup” button.

  • Enter your domain
  • Re-enter to confirm
  • Click the confirm button
  • Click the Add button
  • We now need to verify the domain, so click Verify Domain and copy the TXT value
  • In my instance my DNS supplier offers a “Verify domain” option, but you can just as easily add a TXT record with name @ and the copied TXT value, OR use the “Verify Domain Ownership” button if one exists
  • Once validation has completed, go to the Email Communication Service | Settings | Provision domains and you’ll notice SPF, DKIM and DKIM2 are not verified, i.e. they’ll show “Configure”
  • Click on “Configure” for SPF (any will do); this will show configuration for SPF, DKIM and DKIM2
  • For SPF, copy the SPF value, go to your DNS supplier and create a new DNS record of type TXT with a name of @, and paste the copied value into the record’s value
  • For DKIM, copy the DKIM record name, go to your DNS supplier and create a new DNS record of type CNAME, paste the record name into the name of your record and then copy the DKIM value from Azure into the CNAME value (if you have an option for Proxy, set it to DNS only)
  • Finally, for DKIM2, copy the DKIM2 record name, go to your DNS supplier and create a new DNS record of type CNAME, paste the record name into the name of your record and then copy the DKIM2 value from Azure into the CNAME value (if you have an option for Proxy, set it to DNS only)
  • Go back to the Azure SPF configuration and click Next then Done

Verification can take some time, but when completed your custom domain should show Domain Status, SPF Status, DKIM status and DKIM2 status all Verified.

Connecting to the Communication Service

We’ve configured our domains, now we want to connect the domains to the “Communication Service” that you created earlier.

  • From the “Communication Service” go to Email | Domains
  • Click on Connect domains
  • Select your subscription, resource group, your email service and finally the verified domain you wish to use; you can add multiple verified domains, for example a custom domain and your free Azure subdomain.

Now all that’s left is to test the email, so

  • From the “Communication Service” go to Email | Try Email
  • Select the domain
  • Select your sender
  • Enter one or more recipients
  • I’ll leave the rest as default
  • If all fields are correct a “Send” button will appear, click it to send the email.

Whilst trying the email out you’ll have noticed the source code on the right – this gives you the code to place in your Azure function or other services.

Code

Here’s an example of the code generated via “Try Email”

using System;
using System.Collections.Generic;
using Azure;
using Azure.Communication.Email;

string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
var emailClient = new EmailClient(connectionString);


var emailMessage = new EmailMessage(
    senderAddress: "DoNotReply@<from_domain>",
    content: new EmailContent("Test Email")
    {
        PlainText = @"Hello world via email.",
        Html = @"
		<html>
			<body>
				<h1>
					Hello world via email.
				</h1>
			</body>
		</html>"
    },
    recipients: new EmailRecipients(new List<EmailAddress>
    {
        new EmailAddress("<to_email>")
    }));
    

EmailSendOperation emailSendOperation = emailClient.Send(
    WaitUntil.Completed,
    emailMessage);

ReturnType and Parameters in TypeScript

TypeScript has a couple of utility types which are useful for describing types when none are explicitly declared.

Let’s assume we have this simple function, which takes a string parameter and returns an object

function getData(key: string) {
   return { key, firstName: "Scooby", lastName: "Doo" }
}

Using ReturnType creates a type that matches the getData return type, i.e. { key: string, firstName: string, lastName: string }, whereas Parameters creates a tuple of the parameter types, [key: string]

type Return = ReturnType<typeof getData>;
type Params = Parameters<typeof getData>;
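As a runnable sketch, here are both utility types applied to getData (redefined so the snippet is self-contained); the "abc" argument is just an arbitrary example value:

```typescript
// Same simple function as above
function getData(key: string) {
  return { key, firstName: "Scooby", lastName: "Doo" };
}

// The object shape getData returns: { key: string; firstName: string; lastName: string }
type Data = ReturnType<typeof getData>;
// The parameter list as a tuple: [key: string]
type Args = Parameters<typeof getData>;

// Both types are now usable independently of getData itself
const args: Args = ["abc"];
const data: Data = getData(...args);

console.log(data.firstName, data.lastName);
```

This is handy when a library exports a function but not the types of its arguments or result.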

Sentiment analysis using Python and TextBlob

Let’s create a really simple FastAPI app. Create yourself an app file (app.py) and a requirements file (requirements.txt).

We’re going to use TextBlob to process our text.

In the requirements.txt add the following

fastapi
uvicorn
textblob

In the app.py add the following imports

from fastapi import FastAPI
from textblob import TextBlob

Now let’s create the FastAPI app and a POST endpoint named sentiment; the code should look like this

app = FastAPI()

@app.post("/sentiment")
def analyze_sentiment(payload: dict):
    text = payload["text"]
    blob = TextBlob(text)
    polarity = blob.sentiment.polarity
    subjectivity = blob.sentiment.subjectivity
    return {
        "polarity": polarity,
        "subjectivity": subjectivity
    }

Don’t forget to run pip install

pip install -r requirements.txt

or if you’re using PyCharm, let this install the dependencies.

Run the app using

uvicorn app:app --reload

Note: as we’re using FastAPI we can access the OpenAPI interface using http://localhost:8000/docs

Now from curl run

curl -X POST http://localhost:8000/sentiment -H "Content-Type: application/json" -d '{"text": "I absolutely love this!"}'

and you’ll see a result along the following lines

{"polarity":0.625,"subjectivity":0.6}

Polarity is within the range [-1.0, 1.0], where -1.0 is a very negative sentiment, 0.0 a neutral sentiment and 1.0 a very positive sentiment. Subjectivity is in the range [0.0, 1.0], where 0.0 is very objective (i.e. facts or neutral statements) and 1.0 is very subjective (i.e. opinions, feelings or personal judgements).
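To make those ranges concrete, here’s a tiny helper (sketched in TypeScript to match the other examples in this collection) that buckets a polarity score into a label. The ±0.25 cut-offs are arbitrary choices of mine, not something TextBlob defines:

```typescript
// Bucket a TextBlob-style polarity score ([-1.0, 1.0]) into a rough label.
// The +/-0.25 cut-offs are arbitrary assumptions, not part of TextBlob.
function polarityLabel(polarity: number): "positive" | "neutral" | "negative" {
  if (polarity > 0.25) return "positive";
  if (polarity < -0.25) return "negative";
  return "neutral";
}

// The curl example above scored 0.625, i.e. clearly positive
console.log(polarityLabel(0.625));
```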

Pick and Omit in TypeScript

Pick is used to select fields from a type to create a new type, while Omit returns a type with the supplied fields removed.

For example, if we have a simple type such as this

type Person = {
  firstName: string
  lastName: string
  age: number
}

We might wish to create a new type based upon the Person type but with only the first name. We can use Pick to pick the fields like this

function pickSample(person: Pick<Person, "firstName">): string {
  return `Hello, ${person.firstName}!`
}

We can do the opposite using Omit, hence excluding fields

function omitSample(person: Omit<Person, "age">): string {
  return `Hello, ${person.firstName} ${person.lastName}`;
}
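Calling the two functions shows what each mapped type demands at the call site; here’s a self-contained sketch repeating the definitions above:

```typescript
type Person = {
  firstName: string;
  lastName: string;
  age: number;
};

// Pick<Person, "firstName"> keeps only firstName
function pickSample(person: Pick<Person, "firstName">): string {
  return `Hello, ${person.firstName}!`;
}

// Omit<Person, "age"> keeps everything except age
function omitSample(person: Omit<Person, "age">): string {
  return `Hello, ${person.firstName} ${person.lastName}`;
}

// The Pick version only requires firstName
console.log(pickSample({ firstName: "Scooby" }));
// The Omit version requires firstName and lastName, but not age
console.log(omitSample({ firstName: "Scooby", lastName: "Doo" }));
```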

CSS functions

CSS functions extend CSS with a more “programming language”-like set of features, i.e. we can create functions with parameters, and even add type safety and return values. Note: at the time of writing, @function support is limited to newer Chromium-based browsers, so check availability before relying on it.

Let’s start by looking at the basic syntax with a simple function which returns a given value depending on the “responsive design” breakpoint.

@function --responsive(--sm, --md, --lg) {
  result: var(--lg);

  @media(width <= 600px) {
    result: var(--sm);
  }
  @media(width > 600px) and (width <= 800px) {
    result: var(--md);
  }
  @media(width > 800px) {
    result: var(--lg);
  }
}

What’s happening here is that the function is declared with @function, and the name (in this case responsive) is prefixed with --, as are any parameters. Hence we have three parameters; the first is what’s returned if the width is <= 600px, and so on. The result: declaration is not quite equivalent to a return, as it does not short-circuit and immediately return a value; instead, if result: is set again later, the last result set is used as the “returned value”.

Here’s an example of us setting a 200px square with different colours upon the different break points

div {
  width: 200px;
  height: 200px;
  background: --responsive(
    blue,
    green,
    red);
}

We can also supply default values to a function’s parameters, for example we could give --lg a default of “no-value” using

@function --responsive(--sm, --md, --lg: no-value)

Here’s an example of us setting other defaults with values

@function --responsive(--sm: blue, --md: green, --lg: red)

Interestingly we can also make things type safe, for example let’s set each parameter as being a color and the return value also being of type color

@function --responsive(--sm <color>: blue, --md <color>: green, --lg <color>: red) returns <color>

We can also allow multiple types; let’s assume we want an --opacity function where the amount can be a percentage or a number. We might write something like

@function --opacity(--color, --opacity type(<number> | <percentage>): 0.5) returns <color> {
  result: rgb(from var(--color) r g b / var(--opacity));
}

and in usage

div {
  width: 200px;
  height: 200px;
  background: --opacity(blue, 80%);
  /* background: --opacity(blue, 0.3); */
}

Auto-discovery using Vite (and React)

I’m messing around with a Shell application in React and wanting to load information from apps/components that are added to the shell dynamically, i.e. via auto-discovery.

Now, we’re using React for this example, so we can create React apps using Vite for what we’ll call our components, as these can be loaded into React as components. The reason to create these as standalone apps is to give us something to develop against.

So let’s assume I have a main application, the Shell app, which can pick up routes for functionality (our apps/components) that may be added later in the development process; basically a pluggable architecture which just uses React components, with those components supplying routing information to allow us to plug them into the Shell.

To give it more context: I build a Shell application with Search, and later on want to add a Document UI. It’d be nice if the Document app could be worked on separately and, when ready, deployed to a specific location where the Shell (upon refresh) can discover the new component and wire it into the Shell.

Vite has a feature, import.meta.glob, which we can use to discover routes.tsx files that are deployed to a specific path, i.e.

const modules = import.meta.glob<Record<string, unknown>>(
  '../../apps/*/src/routes.tsx', { eager: true }
);

This returns an object whose keys are the file paths and whose values are the imported modules (or, when not eager, functions that import them). For example it might locate components such as

apps/search/src/routes.tsx
apps/document/src/routes.tsx
apps/export/src/routes.tsx
apps/settings/src/routes.tsx

If eager is missing (or false) then Vite lazily imports the modules: each value is a function returning a promise which you’d need to call yourself. With eager: true, Vite imports the modules immediately.

If we use something like this


import type { RouteObject } from 'react-router-dom';

const modules = import.meta.glob<Record<string, unknown>>(
  '../../apps/*/src/routes.tsx', { eager: true }
);

export const allAppRoutes: RouteObject[] = Object.values(modules)
  .flatMap((mod) => {
    const arr = Object.values(mod).find(
      (v): v is RouteObject[] => Array.isArray(v) && 
        v.every(item => typeof item === 'object' && 
           item !== null && 'path' in item && 'element' in item)
    );
    return arr || [];
  });
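import.meta.glob only exists inside Vite, but the detection logic itself is plain TypeScript. Here’s a sketch where a hand-built modules object stands in for Vite’s eager glob result (the module paths, export names and routes are made up for illustration):

```typescript
// Simplified stand-in for react-router-dom's RouteObject
type RouteObject = { path: string; element: unknown };

// Pretend result of import.meta.glob('../../apps/*/src/routes.tsx', { eager: true }):
// keys are file paths, values are each module's exports
const modules: Record<string, Record<string, unknown>> = {
  'apps/search/src/routes.tsx': {
    searchRoutes: [{ path: '/search/*', element: 'SearchApp' }],
  },
  'apps/document/src/routes.tsx': {
    documentRoutes: [{ path: '/document/*', element: 'DocumentApp' }],
  },
};

// Same idea as in the text: find the export that looks like a RouteObject[]
const allAppRoutes: RouteObject[] = Object.values(modules).flatMap((mod) => {
  const arr = Object.values(mod).find(
    (v): v is RouteObject[] =>
      Array.isArray(v) &&
      v.every(
        (item) =>
          typeof item === 'object' && item !== null &&
          'path' in item && 'element' in item
      )
  );
  return arr || [];
});

console.log(allAppRoutes.map((r) => r.path));
```

The nice property is that each component module can name its routes export whatever it likes; we only look for the shape.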

Then we could use the routes via react-router-dom in App.tsx, like this

const baseRoutes = [
  { path: '/', element: <div>Welcome to the Shell</div> },
];

const routes = [
  ...baseRoutes,
  ...allAppRoutes,
];
return (
  <AuthContext.Provider value={auth}>
    <ShellLayout>
      <Suspense fallback={<div>Loading…</div>}>
        <Routes>
          {routes.map(({ path, element }) => (
            <Route key={path} path={path} element={element} />
          ))}
        </Routes>
      </Suspense>
    </ShellLayout>
  </AuthContext.Provider>
);

and finally here’s an example routes.tsx file for our components

import SearchApp from './App';

export const searchRoutes = [
  { path: '/search/*', element: <SearchApp /> },
];

Looking at security features and the web

Let’s take a look at various security features around web technologies. I’ll be concentrating on their use in ASP.NET, but the information should be valid for other frameworks too.

Note: We’ll look at some code for implementing this in a subsequent set of posts.

Authentication and Authorization

We’re talking Identity, JWT, OAuth, Open ID Connect.

Obviously the use of proper authentication and authorisation ensures only legitimate users have access to resources, and enforcing least privilege and role-based access ensures authenticated users can only access resources befitting their privileges.

OWASP risks mitigation:

  • A01 Broken Access Control, improper enforcement of permissions
  • A07 Identification and Authentication failures, weak or missing authentication flows

Data protection API (DPAPI) / ASP.NET Core Data Protection

This is designed to protect “data at rest”, such as cookies, tokens, CSRF keys etc. and provides key rotation and encryption services.

OWASP risks mitigation:

  • A02 Cryptographic failures, weak or missing encryption of sensitive data

HTTPS Enforcement and HSTS

This forces encrypted transport layers and prevents protocol downgrade attacks.

OWASP risks mitigation:

  • A02 Cryptographic failures, sensitive data exposure
  • A05 Security misconfigurations, missing TLS or insecure defaults

Anti-Forgery Tokens (CSRF Protection)

This prevents cross-site request forgery by validating user intent.

OWASP risks mitigation:

  • A01 Broken access control
  • A05 Security misconfigurations
  • A08 Software and Data integrity failures, such as session integrity

Input Validation and Model Binding Validation

This prevents malformed or malicious input from reaching the business logic.

OWASP risks mitigation:

  • A03 Injection, such as SQL, NoSQL and command injections
  • A04 Insecure design, lacking validation rules
  • A05 Security misconfigurations

Output Encoding

This prevents untrusted data from being rendered; for example it covers things like Razor, Tag Helpers and HTML Encoders.

OWASP risks mitigation:

  • A03 Injection
  • A05 Security misconfigurations
  • A06 Vulnerable and outdated components

Security Headers

Covers things such as CSP, X-Frame-Options, X-Content-Type-Options and Referrer-Policy, and mitigates XSS, clickjacking, MIME sniffing and data leakage.

OWASP provides explicit guidance on recommended headers.

OWASP risks mitigation:

  • A03 Injection, CSP reduces XSS
  • A05 Security misconfigurations, missing headers
  • A09 Security logging and monitoring failures, via reporting endpoints

Rate limiting and throttling

This is included as built-in ASP.NET middleware, but needs to be enabled.

It prevents brute force, credential stuffing and resource exhaustion attacks.

OWASP risks mitigation:

  • A07 Identification and Authentication failures
  • A10 Server side request forgery (SSRF), limits abuse
  • A04 Insecure design, lack of abuse protection

CORS (Cross-Origin Resource Sharing)

This controls which origins can access APIs and prevents unauthorized cross-site API calls.

OWASP risks mitigation:

  • A05 Security misconfiguration
  • A01 Broken access control

Cookie Security

Protects session cookies from theft or misuse.

OWASP risks mitigation:

  • A07 Identification and Authentication failures
  • A02 Cryptographic Failures
  • A01 Broken access control

Dependency Management

When using third party dependencies via NuGet, NPM etc. we need to ensure libraries are patched and up to date.

OWASP risks mitigation:

  • A06 Vulnerable and outdated components

Logging and Monitoring

This covers things like Serilog, Application Insights and the built-in logging etc.

It’s used to detect suspicious activities, as well as to support incident response.

OWASP risks mitigation:

  • A09 Security Logging and monitoring failures

Secure deployment and configuration

This covers all forms of configuration, including appsettings.json, key vault, environment separation etc.

Here we want to prevent secrets being exposed and enforce secure defaults.

OWASP risks mitigation:

  • A05 Security misconfiguration
  • A02 Cryptographic Failures
Generating QR codes with JavaScript

We can generate QR codes using the qrcode package. If we just want this as tooling, for example for manually generating codes, then we can add it as a dev dependency

npm install qrcode --save-dev

Now update your package.json with this script

"generate:qrs": "node ./scripts/generate-qrs.mjs",

As you can see, it references a scripts folder, so add that folder and create the file generate-qrs.mjs with the following code

import QRCode from 'qrcode';
import fs from 'fs/promises';
import path from 'path';

const outDir = path.join(process.cwd(), 'public', 'qrcodes');
await fs.mkdir(outDir, { recursive: true });

const sites = [
  { name: 'www.mywebsite.co.uk', url: 'https://www.mywebsite.co.uk' },
];

for (const site of sites) {
  const pngPath = path.join(outDir, `${site.name}.png`);
  const svgPath = path.join(outDir, `${site.name}.svg`);

  console.log(`Generating ${pngPath} and ${svgPath}`);

  // PNG (300x300)
  await QRCode.toFile(pngPath, site.url, { width: 300 });

  // SVG
  const svgString = await QRCode.toString(site.url, { type: 'svg' });
  await fs.writeFile(svgPath, svgString, 'utf8');
}

console.log(`QR codes generated in ${outDir}`);

In the above, each site includes a name; this is the file name, and we’ll create both a PNG and an SVG file with that name. The url is what’s embedded into the QR code.

Bringing markdown into your React app

I’m working on a little website and I wanted to store some markdown files in a /content folder which allows a user to create their content as markdown without ever needing to change the website – think of this as a very simple file based CMS.

So the idea is we use git to handle versioning via revisions. The user can just change their markdown files for their content and the website will rebuild in something like GitHub actions and the content is then used by the site.

Depending on what we’re doing we’ve got a couple of options to include the markdown.

The first is simply import the markdown file using

import contactContent from '../../content/contact.md?raw';

export default function Contact() {
  return (
     <div>
        {contactContent}
     </div>
  );
}

The ?raw informs Vite (or the like) not to try to parse this file, but to supply it as raw text in a string.

This is fine if you’re just displaying the text, but if you want to render the markdown you can use react-markdown.

Add the package (and the vite-plugin-markdown) using

npm install react-markdown
npm install vite-plugin-markdown

You’ll likely want these plugins as well

npm install rehype-raw
npm install remark-gfm

Now we can use the ReactMarkdown component like this

import ReactMarkdown from 'react-markdown';
import remarkGfm from 'remark-gfm';
import rehypeRaw from 'rehype-raw';

<ReactMarkdown 
  remarkPlugins={[remarkGfm]}
  rehypePlugins={[rehypeRaw]}
>
  {content}
</ReactMarkdown>

In this example we’d import the markdown as we did earlier; in this case we’re passing it (potentially via props) as a content string.

In my case I’m using Vite, and the import with ?raw tells the Vite tooling to treat this as a plain text file; it gets bundled, hence it’s not visible as a separate file in the dist folder. This is great because we can change the markdown and hot reload will redisplay the changes.