Express server and CORS

If we need to handle CORS within Express, we can simply write the following (we’ll assume you’ve also added express to your packages)

  • yarn add cors

Now our usual Express code with the addition of CORS support

import express from "express";
import cors from "cors";
import http from "http";

const app = express();
const server = http.createServer(app);

app.use(cors());

const port = 4000;
server.listen(port, 
  () => console.log(`Server on port ${port}`)
);
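
By default cors() allows requests from any origin. If you need to lock the API down to a known front end, the middleware also accepts an options object; here's a minimal sketch (the origin URL is just a placeholder):

const corsOptions = {
  // only allow requests from our known front end (placeholder URL)
  origin: "http://localhost:3000"
};

app.use(cors(corsOptions));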

Checking for known vulnerabilities in our node packages etc. with retire.js

I noticed, a while back, that GitHub has some fabulous tooling which runs across our repos, checking our Maven and Node packages (and probably more). I wanted to have something similar hooked into my CI/CD pipeline for my React and Node projects.

There are several solutions, but the one I'm talking about here is retire.js. There are also several ways to run retire.js; I'm going to concentrate on running it from the command line.

So first off you need to install retire.js, either globally or, as in my case, as a dev dependency

  • yarn add -D retire

You may also need to run the following if you get an error saying regexp-tree is missing

  • yarn add -D regexp-tree

Now we can simply run yarn retire from our project’s folder. Without any arguments this will run both JavaScript and NPM checks.
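
If you want this as part of a regular build or CI step, one option is to expose it via a script entry in package.json (a sketch; the script name is just a suggestion):

"scripts": {
  "check:vulnerabilities": "retire"
}

Then yarn check:vulnerabilities runs the same checks from any CI pipeline that can execute yarn scripts.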

In my case I got three vulnerabilities listed for jquery, but as it's primarily used as part of webpack dev server (in my case) I don't really need/want them reported; they're outside of my hands and hopefully webpack dev server will update in forthcoming releases, so I want to ignore these for now. To ignore such reports, create a file named .retireignore.json (other variants of this file exist) and add the following

[
  {
    "component": "jquery",
    "justification": "Used by webpack dev server"
  }
]

In this example I've ignored all issues around jquery, but that may be too coarse and mean we do not catch other possible issues around jquery usage, so we can instead add identifiers and list specific issues to ignore, for example

[
  {
    "component": "jquery",
    "identifiers": { "issue": "2432" },
    "justification": "CORS issue, we only worth within the intranet"
  }
]

Note: from what I could tell (although I'm not 100% on this) you can only ignore single issues, despite the use of the plural “identifiers”. I would assume that if you're going to ignore multiple issues then it's probable you would just ignore all issues for a specific version of a package.

We can ignore specific versions of packages using

[
  {
    "component": "jquery",
    "version": "3.3.1",
    "justification": "Used by webpack dev server"
  }
]

Pre-commit linting with husky and lint-staged

You've got your JavaScript or TypeScript project, you've added eslint and set up all the rules, and you're happily coding, committing and pushing changes to Git. The only problem is you're ignoring your lint results or, worse still, not even running lint prior to committing changes.

Enter Husky and lint-staged.

Note: I’m describing using Husky on a Windows machine, installation may vary depending on OS.

Let’s add them to our project using

  • yarn add -D husky lint-staged

Now, within package.json (after the scripts section is as good a place as any) add the following

"husky": {
  "hooks": {
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "*.{js,jsx,ts,tsx}": [
    "eslint"
  ],
  "*.{js,jsx,ts,tsx,json}": [
    "prettier --write",
    "git add"
  ]
},

Note: Husky also supports its own configuration files .huskyrc, .huskyrc.json or .huskyrc.js
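
For example, if you prefer to keep package.json lean, the equivalent configuration in a standalone .huskyrc.json would (assuming I've read the docs right) simply be the value of the "husky" key above:

{
  "hooks": {
    "pre-commit": "lint-staged"
  }
}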

Husky adds the ability to run applications/code on various Git hooks. The most useful, from the perspective of this post, is the pre-commit hook, as we want to stop any commits until all lint failures are fixed.

In the configuration above, we call lint-staged when git commit is executed. The commit hook will run husky, and husky will then run lint-staged with the previously defined configuration.

Lint-staged runs the linter (in this configuration, eslint) over our staged files and also executes prettier against the same files (as well as JSON files), re-staging any resulting changes via the git add step.

With these two tools set up, we no longer have the option to forget to run the linter or to ignore it, as it's now part of the commit process. Of course we only fail a commit on staged files, so in situations where we add these tools to an existing project we're not forced to fix all linting issues until we attempt to actually stage and commit the files.

I’ve merged or rebased and don’t want to fix everything

Obviously all's great until you find old code that needs to be merged, or you rebase and now have staged files which fail the lint process and you don't want to fix them.

In such cases, simply set the environment variable HUSKY_SKIP_HOOKS to true or 1 prior to running a git command, for example

HUSKY_SKIP_HOOKS=1 git rebase ...

See Skip all hooks (rebase).

Beware the formatting infinite fix then fail circle

One thing to be aware of if you're using prettier, as per the configuration above: make sure your eslint and prettier configurations adhere to the same rules. Initially I found that eslint would complain about some formatting (I cannot recall which rule at the moment) and so would fail the commit; upon changing the offending code to pass eslint, the prettier command ran as part of lint-staged and changed the format back, meaning it then failed the lint and would fail the commit again.

However, running prettier and having it write changes is obviously a great way to (at least try to) enforce consistency on our code format.

Requesting permissions in Android with a Target SDK Version >= 23

Various Android permissions (classed as “dangerous”) now require requesting at runtime within applications targeting Android SDK >= 23.

So for example, on older Android versions we simply set the permissions, such as those for accessing Bluetooth, within the AndroidManifest.xml (or via the editor within Visual Studio – I’m using Xamarin Forms for my application, hence Visual Studio). These look like this

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />

All works fine until you try to run your application (which requires these permissions) on an Android device with a version >= 23. Instead, within the debug output you'll see security exceptions around such permissions.

So along with our manifest entries, we now need to request these permissions at runtime. Within MainActivity.cs we can write something like

private void CheckPermissions()
{
   ActivityCompat.RequestPermissions(this, new []
   {
      Manifest.Permission.AccessCoarseLocation,
      Manifest.Permission.AccessFineLocation,
      Manifest.Permission.Bluetooth,
      Manifest.Permission.BluetoothAdmin
   }, 1);
}

We might wish to first check whether a permission has already been granted, using code such as

if (ContextCompat.CheckSelfPermission(
   this, Manifest.Permission.AccessCoarseLocation) 
   != Permission.Granted)
{
   // request permission
}

On top of this, we probably want to either inform the user of, or handle, failures to get the requested permissions (for example by disabling such functionality). In such cases we override the OnRequestPermissionsResult method within the MainActivity.cs file (i.e. within the FormsAppCompatActivity class)

public override void OnRequestPermissionsResult(
   int requestCode, 
   string[] permissions, 
   Permission[] grantResults)
{
   // we may want to handle failure to get the permissions we require
   // we should handle here, otherwise call the base method
   base.OnRequestPermissionsResult(requestCode, permissions, grantResults);
}

See Permissions In Xamarin.Android, also see the Android documentation Permissions Overview.

Docker, Spring Boot and MongoDB

I wanted to create a docker build to run a Spring Boot based application along with its MongoDB database, which proved interesting. Here's what I found out.

Dockerfile

To begin with, we need to create a docker configuration file, named Dockerfile. This will be used to create a docker image in which we will host a Spring Boot JAR. Obviously this requires that we create an image based upon a Java image (or create our own). So let's base our image on a lightweight open JDK 1.8 image, openjdk:8-alpine.

Below is an example Dockerfile

FROM openjdk:8-alpine
MAINTAINER putridparrot

RUN apk update

ENV APP_HOME /home
RUN mkdir -p $APP_HOME

ADD putridparrot.jar $APP_HOME/putridparrot.jar

WORKDIR $APP_HOME

EXPOSE 8080

CMD ["java","-Dspring.data.mongodb.uri=mongodb://db:27017/","-jar","/home/putridparrot.jar"]

The above will be used to create our image, based upon openjdk:8-alpine. We then run an update (in case it's required) and create an environment variable for our application folder (we'll simply install our application into /home, but it could be more specific, such as /home/putridparrot/app or whatever); we then create that folder.

Next we ADD our JAR, so this is going to in essence copy our JAR from our host machine into the docker image so that when we run the image it’ll also include the JAR within that image.

I’m also exposing port 8080 as my JAR will be exposing port 8080, hence when we interact with port 8080 docker will proxy it through to our JAR application.

Finally we add a command (CMD) which will run when the docker image is run. So in this case we run the executable JAR, passing in some configuration to allow it to access a mongodb instance (which will be running in another docker instance).

Note: The use of the db host is important. It need not be named db, but the name needs to match the service name we'll be using within the upcoming docker-compose.yml file
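
As an aside, instead of passing the URI via -D on the command line, the same setting could live in the application's application.properties (a sketch using the standard Spring Data MongoDB property; the db host again matches the compose service name):

# application.properties
spring.data.mongodb.uri=mongodb://db:27017/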

Before we move onto the mongodb container we need to try to build our Dockerfile; here are the commands

docker rmi putridparrot --force
docker build -t putridparrot .

Note: These commands should be run from the folder containing our Dockerfile.

The first command will force remove any existing images and the second command will then build the docker image.
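
At this point you can also sanity-check the image on its own with docker run (expect connection errors against mongodb, since nothing is listening on the db host yet, but the container itself should start):

docker run -p 8080:8080 putridparrot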

docker-compose.yml

So we've created a Dockerfile which will be used to create our docker image, but we now want to create a docker-compose file which will be used to run both our newly created image and a mongodb image, by use of options such as depends_on and the name of our mongo service (which we used within the JAR execution command). Here's the docker-compose.yml file

version: "3.1"

services:
  putridparrot:
    build: .
    restart: always
    ports: 
      - "8080:8080"
    depends_on:
      - db

  db:
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
    restart: always

The first line simply sets the version of the docker-compose syntax, in this case 3.1. This is followed by the services which will be run by docker-compose. The first service listed is our JAR's image. In fact we do not use a prebuilt image; we rebuild it (if required) via the build option – this looks for a Dockerfile in the supplied folder (in this case we assume it's in the same folder as the docker-compose.yml file). We then set up the port forwarding to the docker image. This service depends on mongodb running, hence the depends_on option.

The next service is our mongodb image. As mentioned previously, the name here can be whatever you want, but to allow our other service to connect to it, it should be used within our JAR configuration. Think of it this way – this name is the hostname of the mongodb service, and docker will handle the name resolution between containers.

Finally, we obviously use the mongo image, and we want to expose the ports to allow access to the running instance, and also store the data from the mongodb on our host machine, allowing it to be reused when a new instance of this service is started.

Now we need to run docker-compose using

docker-compose up

If all goes well, this will (if required) build a new image for our JAR and then bring up the services. As the first service depends_on the second, it will in essence be started once the mongodb service container is up (note that depends_on controls start order, not readiness, so the application should still be prepared to retry its database connection), allowing it to then connect to the database.

Adding TypeScript to Electron

So we’ve seen that Electron is basically a browser window with integrations which allows us to use JavaScript to interact with it, but as somebody who prefers the type safety features that come with TypeScript, obviously I’d want to integrate TypeScript as well.

If you've not got TypeScript installed (globally or within the project) then run

npm install --save-dev typescript

If we take our application from the Getting Started post, let's simply start by adding a tsconfig.json; just run the following from your project's root folder

tsc --init

Now change our main.js to main.ts. We'll obviously need a build step to transpile the TypeScript to JavaScript which can then be used by Electron, so add the following to the scripts section in package.json

"build": "tsc"

You might also like to either rename the start script or add another script to both build/transpile and run the application, i.e.

"go": "tsc && electron ."

Obviously this will litter your code base with generated .js files, so it's best to transpile our code to a folder of its own, in this case we'll call it src. Just add the following to the tsconfig.json

"outDir": "./src",

Then change package.json “main” to the following

"main": "./src/main.js",

That’s all there is to it, now we can add some type checking to our code and write TypeScript.

Getting started with Electron

Let’s start by installing the latest version of electron using

  • npm i -D electron@latest

The Writing Your First Electron App guide goes through the process of setting up your first electron application; I'll recreate some of these steps below…

  • Create a folder, mine’s test1
  • Run npm init, when asked for the entry point type main.js instead of index.js
  • Add main.js
  • Add index.html
  • Run npm install --save-dev electron
  • Add scripts section to package.json, i.e.
    "scripts": {
      "start": "electron ."
    }
    

Next up let’s add the following to the main.js file

const { app, BrowserWindow } = require('electron');

function createWindow () {
  // Create the browser window.
  let win = new BrowserWindow({
    width: 800,
    height: 600,
    webPreferences: {
      nodeIntegration: true
    }
  })

  // and load the index.html of the app.
  win.loadFile('index.html')
}

app.on('ready', createWindow)

As you can see, we require the app object and BrowserWindow from electron. Next we create a BrowserWindow and load the index.html into it, so let's supply something for it to load.
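
You'll likely also want the application to quit when all of its windows are closed (the official tutorial adds something similar); a minimal sketch:

// quit when all windows are closed, except on macOS where
// applications conventionally stay active until explicitly quit
app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') {
    app.quit()
  }
})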

Change the index.html to the following

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Hello World!</title>
    <!-- https://electronjs.org/docs/tutorial/security#csp-meta-tag -->
    <meta http-equiv="Content-Security-Policy" content="script-src 'self' 'unsafe-inline';" />
  </head>
  <body>
    <h1>Hello World!</h1>
    We are using node <script>document.write(process.versions.node)</script>,
    Chrome <script>document.write(process.versions.chrome)</script>,
    and Electron <script>document.write(process.versions.electron)</script>.
  </body>
</html>

This demonstrates HTML content as well as using JavaScript within it.

Now run npm start and view your first electron application.

Rust modules

Rust includes a module system for grouping code into logical groups, which may include structs, implementations, other modules etc. Modules also give us the ability to manage the visibility of a module's code.

There are a couple of ways of declaring our modules.

Option 1

Assuming we've used cargo init to create our project, or simply laid out our code in the same way, then we'll have a src folder, and within that we'll create a folder named math which will become our module name. Within math we add a file named mod.rs

src
  |--- math
    |--- mod.rs
  |--- main.rs

So here’s a very basic mod.rs file

pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

and here’s the main.rs file

mod math;

fn main() {
    println!("{}", math::add(1, 2));
}

Option 2

The second option is to name the module file math.rs and store it at the same level as the main.rs file, i.e.

src
  |--- math.rs
  |--- main.rs

Nested Modules

We can also nest modules (modules within modules). For example, within math/mod.rs (or math.rs)

pub mod nested {
  pub fn add(a: i32, b: i32) -> i32 {
    a + b
  }
}

// and in main.rs

mod math;

fn main() {
    println!("{}", math::nested::add(1, 2));
}

SVN to Git migration

I've been moving some repositories from SVN to Git, so for reference, here are the basic steps…

Note: These steps do not handle changing the author names etc. For a good explanation of this, check out Migrate to Git from SVN.

The steps are as follows (these also assume you've created a repository within Git for your code to migrate to)

  • git svn clone your_svn_repo
  • cd into your newly created folder
  • git remote add origin your_git_repo
  • git push -u origin master

Debugging Rust in Visual Code

I'm using Visual Code a lot for Rust development, so it'd be good to be able to debug a Rust application within it. First, open your settings via

File | Preferences | Settings

then switch to the settings.json view and add

"debug.allowBreakpointsEverywhere": true,

Now add the C/C++ Microsoft extension if you’ve not already added it to Visual Code.

Next up, select

Debug | Add Configuration...

and the option

C++ (Windows)

Here’s my resultant launch.json file

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "(Windows) Launch",
      "type": "cppvsdbg",
      "request": "launch",
      "program": "${workspaceFolder}/target/debug/test.exe",
      "args": [],
      "stopAtEntry": false,
      "cwd": "${workspaceFolder}",
      "environment": [],
      "externalConsole": false
    }
  ]
}

and that's it, now add a breakpoint to your application, select Debug | Start Debugging (F5) and you're off and running in the debugger.