println! structs

So we’ve got ourselves a simple little struct

struct Point {
    x: i32,
    y: i32
}

We then decide that we’d like to output the current state of the Point using println!, so we write

fn main() {
    let p = Point { x: 20, y: 3 };

    println!("{}", p);
}

Running this will result in the errors `Point` doesn’t implement `std::fmt::Display` and `Point` cannot be formatted with the default formatter. In fact, we don’t really need to implement std::fmt::Display; we can just annotate our struct with #[derive(Debug)] and then use the println! debug formatters (:? or :#?), for example

#[derive(Debug)]
struct Point {
    x: i32,
    y: i32
}

fn main() {
    let p = Point { x: 20, y: 3 };

    println!("{:?}", p);
}

The use of :? will result in the output Point { x: 20, y: 3 }, whereas :#? will display the values on lines of their own (a “prettier” formatter). Both :? and :#? are debug formatters, hence they require either the annotation #[derive(Debug)] or an implementation of std::fmt::Debug, for example

impl std::fmt::Debug for Point {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        write!(f, "(x is {}, y is {})", self.x, self.y)
    }
}

For situations where we simply want to create our own custom display (not just for Debug), then, as per the original error `Point` doesn’t implement `std::fmt::Display`, we would need to implement the std::fmt::Display trait, i.e.

impl std::fmt::Display for Point {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        write!(f, "(x is {}, y is {})", self.x, self.y)
    }
}

This means we no longer require the annotation or the debug formatters, hence our full code will look like this

struct Point {
    x: i32,
    y: i32
}

impl std::fmt::Display for Point {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        write!(f, "(x is {}, y is {})", self.x, self.y)
    }
}

fn main() {
    let p = Point { x: 20, y: 3 };

    println!("{}", p);
}

and as you’d expect our output is now (x is 20, y is 3).

Rust constructors

Rust doesn’t have the concept of a constructor in the sense of C++, C#, Java etc. You create new data structures by simply using the following syntax

struct Point {
   x: i32,
   y: i32
}

let pt = Point { x: 10, y: 20 };

However, you might create an impl with an associated function to create/initialize your structures. By convention, Rust code suggests such functions be named new. For example

impl Point {
    pub fn new() -> Point {
        Point {
            x: 0, 
            y: 0
        }
    }
}

let pt = Point::new();

Of course, we can declare parameters/arguments on such functions just like any other function.
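For example, a minimal sketch taking initial coordinates (the name with_coords is purely illustrative, not a convention):

impl Point {
    // a parameterised "constructor"; just an associated function like new
    pub fn with_coords(x: i32, y: i32) -> Point {
        Point { x, y }
    }
}

let pt = Point::with_coords(10, 20);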

Basics of unit testing in Rust

I’m messing around with some Rust code at the moment, so expect a few posts in the near future. In this post I’m going to jump straight into unit testing in Rust.

You don’t need to have a dependency on any unit testing frameworks as Rust has a unit testing framework integrated within it.

Our unit tests can sit alongside our existing code using the conditional compilation annotation #[cfg(test)]. Let’s create a simple example test, which assumes we have a Stack implementation to test

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn initial_state() {
        let s: Stack<i32> = Stack::new();
        assert_eq!(s.length, 0);
    }
}

The following line simply allows us to use code from the parent scope (i.e. allows us to use the Stack code).

use super::*;

Next up we have the #[test] annotation which (probably fairly obviously) declares the function initial_state to be a test function.

Of course we need some form of assertion code, hence the assert_eq! macro.
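For reference, the test assumes something like the following minimal Stack sketch (an assumption on my part; the length field name is simply taken from the test)

struct Stack<T> {
    items: Vec<T>,
    length: usize,
}

impl<T> Stack<T> {
    fn new() -> Stack<T> {
        Stack { items: Vec::new(), length: 0 }
    }
}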

We can also test whether our code panics by placing the #[should_panic] annotation after the #[test] annotation. This denotes that the system under test should panic (similar to exceptions in other languages).

Sometimes we need to ignore a test; in such cases we can use the #[ignore] annotation.
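A sketch of both annotations might look like this (assuming our Stack has a pop method that panics when the stack is empty):

#[test]
#[should_panic]
fn pop_on_empty_panics() {
    let mut s: Stack<i32> = Stack::new();
    // assuming pop panics when there's nothing to pop
    s.pop();
}

#[test]
#[ignore]
fn expensive_test() {
    // ignored unless run with cargo test -- --ignored
}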

Obviously we need to run our tests; we can use cargo for this. Simply run

cargo test

We can also control the number of threads used to run our tests via the --test-threads option; for example, if we want tests to be run in parallel on two threads, we use

cargo test -- --test-threads=2

See the Rust documentation for further options around controlling how your tests are run.

netstat

Note: This post is primarily on using netstat on Windows

I’ve been using netstat more lately to keep track of websockets being left open etc., and thought it worth creating a post regarding what things mean in netstat, as I’m bound to forget once all the code I’m working on is complete.

We’ll start with a few obvious things by looking at the switches/params available (as taken from netstat -h but included here for completeness)

NETSTAT [-a] [-b] [-e] [-f] [-n] [-o] [-p proto] [-r] [-s] [-t] [interval]

  • -h Displays the help
  • -a Displays all connections and listening ports
  • -b Displays the executable involved in creating each connection. This option requires elevated permissions, i.e. run as admin
  • -e Displays ethernet statistics (may be combined with -s)
  • -f Displays fully qualified domain names (FQDN) for foreign addresses
  • -n Displays address and port numbers in numerical form
  • -o Displays the owning process id (PID) associated with each connection
  • -p proto Shows connections for the protocol specified by proto, which may be TCP, UDP, TCPv6 or UDPv6. If used with the -s option, proto may be IP, IPv6, ICMP, ICMPv6, TCP, TCPv6, UDP or UDPv6.
  • -r Display the routing table
  • -s Displays per protocol statistics, by default statistics are shown for IP, IPv6, ICMP, ICMPv6, TCP, TCPv6, UDP and UDPv6. The -p option may be used to specify a subset.
  • -t Displays the current connection offload state
  • interval Redisplays the selected data/statistics every interval seconds. Press CTRL+C to stop

Possible states displayed might be

  • CLOSED indicates the server has received an ACK signal from the client and is closed
  • CLOSE_WAIT indicates the server has received the first FIN signal, to acknowledge no more data is to be sent from the client, hence the connection is closing
  • ESTABLISHED indicates that the server received a synchronize, SYN, signal. This is only sent in the first packet from the client and the session is established
  • FIN_WAIT_1 indicates the connection is still active but not being used
  • FIN_WAIT_2 indicates the client just received acknowledgement of the first FIN signal from the server
  • LAST_ACK indicates the server is in the process of sending its own FIN signal
  • LISTENING indicates the server is ready to accept a connection
  • SYN_RECEIVED indicates the server just received a SYN signal from the client
  • SYN_SEND indicates the connection is open and active
  • TIME_WAIT indicates the client recognizes the connection as active but it’s not currently being used

Obviously if you’ve got grep installed you might prefer to pipe through grep to locate specific data; in PowerShell use Select-String, i.e. the following will run netstat in default mode and then pipe to Select-String, which will report lines containing port 4000 (not wholly useful in all situations)

netstat | Select-String :4000

Within PowerShell on Windows 10 there is the Get-NetTCPConnection cmdlet, which gives us the power of PowerShell for querying the resultant data, for example

Get-NetTCPConnection | ? {$_.State -eq "Listen"}

This will show all results with the state of Listen.

On Windows 7 (without grep) we can use Find and pipe results like this

netstat -an | Find ":4000"

Don’t forget you can pipe this again to find LISTENING state using

netstat -an | Find ":4000" | Find "LISTENING"

What do the results mean?

Obviously the protocol is listed along with the state (possible options listed previously), but we’ll often see local or foreign addresses such as 0.0.0.0, which means the address/port is listening (etc.) on all network interfaces. 127.0.0.1 is of course your localhost, and processes are listening for connections from the PC itself (i.e. not the network). If the address is your local network IP then the port is listening for connections from the local network.

Common use cases

I’m going to stick with netstat (over Get-NetTCPConnection) as this post is, after all, about netstat.

Which software is making a connection to the outside world?

netstat -b

Get a summary of the current number of bytes sent/received etc.

netstat -e
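A combination I use often (a personal preference rather than gospel): find the process listening on a given port by combining -a, -n and -o, then look the reported PID up in Task Manager

netstat -ano | Find ":4000"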

this is not the this you might expect

this within TypeScript/JavaScript is not the same as you’re used to if you come from an OO language such as C#, Java, C++ or the likes.

Within these languages this is the internal reference to the instance of the object your methods are members of; within TypeScript/JavaScript this depends upon the current “execution context”. This is one of the reasons why, when writing code for event handlers in React (for example), we need code such as

this.handleClick = this.handleClick.bind(this);

In JavaScript the runtime maintains a stack of execution contexts. As such, functions (not part of a class) have access to this, which will point to the global object – in a browser that’s window and in node it’s a special global object.

Yes, for those coming from an OO language it would seem odd that functions outside of classes have access to this.

For example, running node index.js on the index.js file below will display Object [global]; however, if we add 'use strict'; to the start of index.js, this will be undefined.

function fn() {
    console.log(this);
}

fn();

If we now create a JavaScript class (as introduced in ECMAScript 2015), for example

class MyClass {
    constructor(arg1, arg2) {
        this.arg1 = arg1;
        this.arg2 = arg2;
    }
}

We’ll find that this no longer references the global object; instead the new keyword (when we create an instance of this class) causes the JavaScript runtime to create a new object assigned to this, specific to the class. Hence if you console.log(this) from the class you’ll see this within a new execution context, scoped to the class.

Let’s return to our class and add a new method, so the MyClass code should look like this

class MyClass {
    constructor(arg1, arg2) {
        this.arg1 = arg1;
        this.arg2 = arg2;
    }

    output() {
        console.log(this);
    }
}

if we now execute the following

const mc = new MyClass("Scooby", "Doo");
mc.output();

as you’d expect, the output method logs the MyClass instance (shown below) to the console.

MyClass { arg1: 'Scooby', arg2: 'Doo' }

If, however, we instead have the following code

const mc = new MyClass("Scooby", "Doo");
const fn = mc.output;
fn();

the fn() call, which is ultimately just a call to mc.output(), will output undefined for this. What’s happening is that we’re actually executing the function outside of the class and this (in node) is now undefined.

We might think that C# etc. wraps this in a closure or the likes, whereas JavaScript does not. So how do we “bind” this to the new output function – the clue is in the use of the word “bind”. Adding the following to the constructor binds the this from the class to the method, and now calling fn() will output the MyClass object.

this.output = this.output.bind(this);

Interestingly, as JavaScript functions have bind, call and apply functions on them, we could actually bind the function to a totally different instance of an object; hence if we created a totally different class and then bound the function to it, the output would display the new object, i.e.

class PointlessClass  {
}

and now in the MyClass constructor we have

this.output = this.output.bind(new PointlessClass());

Then, using either method of calling the output method or assigning to fn and calling outside of the class, we’ll get PointlessClass {} logged to the console.

If we go back to the code

const mc = new MyClass("Scooby", "Doo");
const fn = mc.output;
fn();

We can change this to bind in the calling code (instead of in the constructor) if we so wish, for example

const mc = new MyClass("Scooby", "Doo");
const fn = mc.output.bind(mc);
fn();

The above will now output the instance of MyClass as its this.

As previously mentioned, a function has bind, call and apply. So bind sets the this on the method, and every time the method is called it’s bound to MyClass. call executes a method against the supplied this only for that single call, as does apply, i.e.

const fn = mc.output;

fn.call(mc);
fn();

the fn.call will output an instance of MyClass but the fn() will again output undefined (or the global execution context).

I mentioned call and apply do much the same thing, so what’s the difference? The difference is in the way arguments are passed to these functions: apply takes an array of arguments whereas call takes the arguments listed individually. Other than that they do the same thing, immediately executing the method against the supplied this.
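As a quick sketch of that difference (the describe function here is purely illustrative):

// a standalone function that relies on this
function describe(prefix, suffix) {
  console.log(prefix + this.arg1 + " " + this.arg2 + suffix);
}

const mc2 = new MyClass("Scooby", "Doo");

// call: arguments are listed individually
describe.call(mc2, "[", "]");    // [Scooby Doo]

// apply: arguments are supplied as an array
describe.apply(mc2, ["[", "]"]); // [Scooby Doo]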

Finally, arrow (also known as fat arrow) functions are part of ECMAScript 2015 and they are automatically bound to this. Let’s assume we change our MyClass to

class MyClass {
    constructor(arg1, arg2) {
        this.arg1 = arg1;
        this.arg2 = arg2;
        this.output = () => console.log(this);
    }
}

Now if we use this function like we did earlier (which, if you recall, output undefined for this)

const fn = mc.output;
fn();

with the arrow function we find this is set to the instance of MyClass it was declared within. So basically it appears to be already bound; in reality it’s better to think of arrow functions as inheriting their this, because unlike non-arrow methods, we cannot rebind an arrow function’s this reference. For example, the following will still output the MyClass instance, not the new PointlessClass that we’ve attempted to bind to

const fn = mc.output.bind(new PointlessClass());
fn();

Redux observable

In the previous post we looked at one way to handle application side effects, such as asynchronous loading of data using Redux sagas.

There’s absolutely nothing wrong with redux sagas, but there are a few alternatives. The alternative we’re going to look at is redux-observable.

So what’s the big difference between redux-saga and redux-observable? I’ve not run any performance or efficiency testing against the two, so I’m going to solely comment on their usage. Sagas use generator functions whilst Epics (the term used within redux-observable for something analogous to a Saga in redux-saga) use, well you guessed it, Observables, i.e. rxjs.

Instead of yield put etc. in a Saga, we return an Observable (e.g. via Observable.create) and call next to pass data to the redux store. I’ve been asked “which should I choose?” by other developers and there really isn’t a clear reason to choose one over the other (if I get a chance to check performance etc. I may amend this post).

I would say, if you’re already including rxjs in your application or you have a good understanding of rxjs, then redux-observables will probably be the best choice. If you’ve not really got an understanding of rxjs or don’t wish to bring in a dependency on rxjs, then stick to sagas.

I could (and probably will) write a post on using rxjs, but to summarise – rxjs (Reactive Extensions) came originally from .NET and offered a push style paradigm for development along with better concurrency capabilities and composability in a declarative manner. Whilst rxjs is not an exact copy (i.e. it uses a different way to compose observable data) it does offer similar capabilities. When abused, Observables can be hard to understand, but the nature of the functionality/operators you get makes them far more powerful than sagas – but then again if you have a good library of functions you can implement similar functionality to rxjs in sagas.

Okay, enough talk, let’s write code. I’m going to lay things out just like the redux-saga post (and yes, even copy and paste some text) to give a sort of comparison of writing the two.

Assuming you have a React application created, we need to run the following

  • yarn add react-redux
  • yarn add redux-observable

To create a simple demo, we’ll change the App.js file to

import React from 'react';
import store from './store';
import { Fetch } from './rootReducer';

function App() {

  function handleDoFetch() {
    store.dispatch({type: Fetch});
  }

  return (
    <div className="App">
      <button onClick={handleDoFetch}>Do Fetch</button>
    </div>
  );
}

export default App;

So this will simply dispatch an action which will ultimately be handled by our observable. Before this happens let’s create a redux store and set up the redux observable middleware, here’s my store.js file

import { createStore, applyMiddleware, combineReducers } from "redux";
import { createEpicMiddleware } from 'redux-observable';
import rootReducer from "./rootReducer";
import rootEpic from "./rootEpic";

export const epicMiddleware = createEpicMiddleware();

const store = applyMiddleware(epicMiddleware)(createStore)(
  combineReducers({
    rootReducer,
  })
);

epicMiddleware.run(rootEpic);

export default store;

We don’t need combineReducers as there’s only one reducer, but it’s there as an example of setting up multiple reducers. Let’s now create a very basic reducer named rootReducer.js

export const Fetch = "FETCH";
export const FetchEnded = "FETCH_ENDED";

export default (state = {}, action) => {
  switch (action.type) {
    case FetchEnded:
      console.log("Fetch Ended");
      return {
        ...state,
        data: "Fetch Ended"
      }
    default:
      break;
  }   
  return state;
}

Notice we’ve got two actions exported, Fetch and FetchEnded, but there’s nothing handling Fetch in this case. This is because redux middleware will in essence pass this through to the redux-observable we’re about to create. We could also handle Fetch here and still handle it within the epic, the point being the epic (via the observable and ofType) is going to handle this action when it sees it.

Now we’ve got everything else in place, let’s add the final piece; the epic will be stored in rootEpic.js and here it is

import { Fetch, FetchEnded } from "./rootReducer";
import { Observable } from "rxjs";
import { mergeMap } from "rxjs/operators";
import { ofType } from "redux-observable";

function rootEpic(
    action$,
    _state$,
    _dependencies) {
  
    return action$.pipe(
        ofType(Fetch),
        mergeMap(action => {
            return Observable.create(o => {
                console.log("fetchData");
                o.next({ type: FetchEnded });
            });
        })
    );
}

export default rootEpic;

Notice that the rootEpic function returns an Observable via Observable.create and it uses next to inform any subscribers (in this case the middleware) to changes of state. Obviously this example is stupidly simple in that it just dispatches FetchEnded to the subscriber(s).

It might be that the observable calls next many times for different values, but in this example we’ve kept things simple. Running the application will display a button and, using the browser’s dev tools, we can see that when the button is pressed the Fetch action is detected by the epic’s pipe, which via the observable dispatches a FetchEnded action, which in turn is handled by the reducer.

As stated, our example is very simple, but in a real world scenario this function could be acting as a websocket client, and every value received would be passed to the next function until cancelled or an error occurred.
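As a sketch of that “many values” case (reusing the Fetch/FetchEnded actions from above, with rxjs’s interval standing in for a real websocket, so treat it as illustrative only):

import { interval } from "rxjs";
import { mergeMap, map, take } from "rxjs/operators";
import { ofType } from "redux-observable";
import { Fetch, FetchEnded } from "./rootReducer";

function tickEpic(action$) {
    return action$.pipe(
        ofType(Fetch),
        mergeMap(() =>
            // emit five FetchEnded actions, one per second, then complete
            interval(1000).pipe(
                take(5),
                map(i => ({ type: FetchEnded, payload: i }))
            )
        )
    );
}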

Another thing to be aware of is that whilst the rootEpic pipe is created once (in our case when added to the middleware), the pipe is called for each action passing through redux, hence we must filter the actions we want to handle using ofType – and even actions dispatched via the observable will come through this epic.

Redux saga

Redux sagas allow us to handle application side effects, such as asynchronous loading of data.

Assuming you have a React application created, we need to run the following

  • yarn add react-redux
  • yarn add redux-saga

To create a simple demo, we’ll change the App.js file to

import React from 'react';
import store from './store';
import { Fetch } from './rootReducer';

function App() {

  function handleDoFetch() {
    store.dispatch({type: Fetch});
  }

  return (
    <div className="App">
      <button onClick={handleDoFetch}>Do Fetch</button>
    </div>
  );
}

export default App;

So this will simply dispatch an action which will ultimately be handled by our saga. Before this happens let’s create a redux store and set up the redux saga middleware, here’s my store.js file

import { createStore, applyMiddleware, combineReducers } from "redux";
import createSagaMiddleware from "redux-saga";
import rootReducer from "./rootReducer";
import rootSaga from "./rootSaga";

export const sagaMiddleware = createSagaMiddleware();

const store = applyMiddleware(sagaMiddleware)(createStore)(
  combineReducers({
    rootReducer,
  })
);

sagaMiddleware.run(rootSaga);

export default store;

We don’t need combineReducers as there’s only one reducer, but it’s there as an example of setting up multiple reducers. Let’s now create a very basic reducer named rootReducer.js

export const Fetch = "FETCH";
export const FetchEnded = "FETCH_ENDED";

export default (state = {}, action) => {
  switch (action.type) {
    case FetchEnded:
      console.log("Fetch Ended");
      return {
        ...state,
        data: "Fetch Ended"
      }
    default:
      break;
  }   
  return state;
}

Notice we’ve got two actions exported, Fetch and FetchEnded, but there’s nothing handling Fetch in this case. This is because redux middleware will pass this through to the redux-saga we’re about to create. We could also handle Fetch here and still handle it within the saga, the point being the saga is going to handle this action when it sees it.

Now we’ve got everything else in place, let’s add the final piece; the saga will be stored in rootSaga.js and here it is

import { put, takeLatest } from 'redux-saga/effects'
import { Fetch, FetchEnded } from "./rootReducer";

function *fetchData() {
    console.log("fetchData")
    yield put({ type: FetchEnded });
}

function* rootSaga() {
    yield takeLatest(Fetch, fetchData);
}

export default rootSaga;

Notice that the rootSaga function is a generator function and it yields the result of a call to fetchData each time the Fetch action is detected.

It might be that fetchData yields many values or even sits in a loop yielding data, but in this example we’ve kept things simple. Running the application will display a button and, using the browser’s dev tools, we can see that when the button is pressed the Fetch action is detected by the saga and the fetchData function runs, which in turn dispatches a FetchEnded action which is handled by the reducer.

As stated, our fetchData is very simple, but in a real world scenario this function could be acting as a websocket client and, for every value returned, would yield each value within a while(true) loop or the likes until cancelled or maybe an error occurred.
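For example, a minimal sketch of that looping style, using redux-saga’s delay effect to stand in for the asynchronous websocket work (so again, illustrative rather than real):

import { delay, put, takeLatest } from 'redux-saga/effects'
import { Fetch, FetchEnded } from "./rootReducer";

function* fetchData() {
    while (true) {
        // delay stands in for awaiting a websocket message etc.
        yield delay(1000);
        yield put({ type: FetchEnded });
    }
}

function* rootSaga() {
    yield takeLatest(Fetch, fetchData);
}

With takeLatest, dispatching another Fetch cancels the previous loop and starts a fresh one.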

JavaScript generator functions

In C# we have iterators which might return data, such as a collection, but can also yield return values, allowing us to more dynamically return iterable data.

Yield basically means return a value but store the position within the iterator, so that when the iterator is called again, execution starts at the next command in the iterator.

These same type of operations are available in JavaScript using generator functions. Let’s look at a simple example

function* generator() {
  yield "A";
  yield "B";
  yield "C";
  yield "D";
}

The function* syntax denotes a generator function, and it’s only within a generator function that we can use the yield keyword.

When this function is called, a Generator object is returned which adheres to both the iterable protocol and the iterator protocol, which basically means the Generator acts like an iterator.

Hence executing the following code will result in each yield value being output, i.e. A, B, C, D

for(const element of generator()) {
  console.log(element);
}
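We can also drive the Generator manually via its next function, which returns an object holding the yielded value and a done flag:

const g = generator();

console.log(g.next()); // { value: 'A', done: false }
console.log(g.next()); // { value: 'B', done: false }
console.log(g.next()); // { value: 'C', done: false }
console.log(g.next()); // { value: 'D', done: false }
console.log(g.next()); // { value: undefined, done: true }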

TypeScript 3.7.2

TypeScript 3.7.2 has just been released.

First off, you’ll want to install it globally on your machine, hence run the following (obviously this will install the latest version, so it’s not specific to version 3.7.2)

npm install -g typescript

or if you want it local to your project, of course you can run

npm install -D typescript 

If you’re using VS Code and you want to set it up for this version (if it’s not already set for the latest), then press F1, select Preferences: Open User Settings and add (or edit) the following (at the root level of the JSON)

"typescript.tsdk": "node_modules\\typescript\\lib",

I’m not going to go through the new features except to say we’ve now got optional chaining via the ?. operator (familiar from C#) and nullish coalescing using the ?? operator.
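As a quick sketch of the two operators (the Config shape here is purely illustrative):

interface Config {
  retries?: {
    count?: number;
  };
}

const config: Config = {};

// optional chaining: evaluates to undefined rather than throwing
const count = config.retries?.count;

// nullish coalescing: falls back only on null/undefined (unlike ||, so a 0 would survive)
const retryCount = count ?? 3;

console.log(retryCount); // 3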

Creating a nuget package (revisited)

I’ve covered some of this previously in my post Creating Local Packages with NuGet, but I wanted to drill down a little more into this here.

Part of the reason for a revisit is that I wanted to look at creating the relevant .nuspec for a couple of projects I’ve put on github and I wanted to cover the .nuspec files in a little more depth.

Before we start

Before we start you’ll probably want to grab the latest nuget.exe from Available NuGet Distribution Versions. The current recommended version is v4.9.4 and this should be placed in your project folder (or of course wherever you prefer in the path).

Generating a nuspec file

We can now run

nuget spec

in my case, this produced the following

<?xml version="1.0"?>
<package >
  <metadata>
    <id>Package</id>
    <version>1.0.0</version>
    <authors>PutridParrot</authors>
    <owners>PutridParrot</owners>
    <licenseUrl>http://LICENSE_URL_HERE_OR_DELETE_THIS_LINE</licenseUrl>
    <projectUrl>http://PROJECT_URL_HERE_OR_DELETE_THIS_LINE</projectUrl>
    <iconUrl>http://ICON_URL_HERE_OR_DELETE_THIS_LINE</iconUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Package description</description>
    <releaseNotes>Summary of changes made in this release of the package.</releaseNotes>
    <copyright>Copyright 2019</copyright>
    <tags>Tag1 Tag2</tags>
    <dependencies>
      <dependency id="SampleDependency" version="1.0" />
    </dependencies>
  </metadata>
</package>

As you can see, it’s supplied the basics along with an example of a dependency. The dependencies are the packages our project is dependent upon. The id can be found via the nuget website or from the nuget packages section within Visual Studio. The versions can also be found in the same way.

If we wish to add files to our nuspec we add a files section after the metadata end tag. For example

  </metadata>

   <files>
      <file src="" target="" />
   </files>

</package>

The src is the relative location of the files to add (maybe we’re adding a README.txt for example). The target is where the file should be copied to. Using content as the start of the target will add the file to the content folder.
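For example, a hypothetical README.txt copied into the package’s content folder might look like

   <files>
      <file src="README.txt" target="content\README.txt" />
   </files>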

Naming conventions

The next thing I want to touch on is naming conventions, which I think becomes important if you’re intending the deploy on the NuGet site (as opposed to local to your organization or the likes).

The Names of Assemblies and DLLs discusses some possible conventions. Obviously the <Company>.<Component>.dll is a very sensible convention to adopt as we will need to ensure our assemblies are as uniquely named as possible to stay away from name clashes with others.

Obviously using such naming conventions will tend to push towards namespace etc. naming conventions also, so have a read of Names of Namespaces also.

Generating our NuGet package from a project

Before we look at the nuspec file itself, let’s cover the simple way of generating our NuGet package, as this might be all you need if the component/code you want to package is fairly self-contained.

Before we create the package, let’s create a bare bones .nuspec file because otherwise, the nuget tool will generate one along these lines

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <id>PutridParrot.Collections</id>
    <version>1.0.0.0</version>
    <title>PutridParrot.Collections</title>
    <authors>PutridParrot</authors>
    <owners>PutridParrot</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Description</description>
    <copyright>Copyright © PutridParrot 2017</copyright>
    <dependencies />
  </metadata>
</package>

Note: the project I ran this against was PutridParrot.Collections.csproj

So let’s take this and change it a little – create a .nuspec named after your project, i.e. mine’s PutridParrot.Collections.nuspec, and paste the below into it (changing names etc. as you need).

<?xml version="1.0"?>
<package >
  <metadata>
    <id>Your Project Name</id>
    <version>1.0.0.0</version>
    <title>Your project title</title>
    <authors>Your Name</authors>
    <owners>Your Name</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Your Description</description>
    <releaseNotes>First Release</releaseNotes>
    <copyright>Copyright 2017</copyright>
    <tags>collections</tags>
  </metadata>
</package>

Note: The tags are used by the NuGet repos, so it’s best to come up with several tags (with a space between each) for your project.

Now we’ve got a nuspec file, this will be embedded into the package by nuget.

From the command line (easiest from the folder containing the project you want to package), run

nuget pack <project-name>.csproj

Note: as the package is zipped, just append .zip to it to open using File Explorer in Windows, so you can see how it’s laid out. Don’t forget to remove the .zip when relocating to a local or remote host.

Autogenerating parts of the nuspec from AssemblyInfo

We can actually tokenize some of our nuspec and have nuget use our project’s AssemblyInfo.cs file – see Replacement tokens.

This means, for example version might be written like this

<version>$version$</version>

and nuget will automatically replace this value, and the nupkg will be named with that same version.
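Other metadata elements can be tokenized in the same way when packing against the csproj, for example (these are all documented replacement tokens)

<id>$id$</id>
<version>$version$</version>
<description>$description$</description>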

Multiple files or how to use nuget pack with the csproj

Running nuget pack against a project is useful but what if you want to handle multiple projects and/or non-project files? Then we would be better off editing the nuspec file to pull in the files we want.

Here’s an example of the previous nuspec which now includes more than one version of the project’s DLL.

<?xml version="1.0"?>
<package >
  <metadata>
    <id>Your Project Name</id>
    <version>1.0.0.0</version>
    <title>Your project title</title>
    <authors>Your Name</authors>
    <owners>Your Name</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Your Description</description>
    <releaseNotes>First Release</releaseNotes>
    <copyright>Copyright 2017</copyright>
    <tags>collections</tags>
  </metadata>
  <files>
    <file src="bin\Release\PutridParrot.Collections.dll" target="lib\net40" />
    <file src="bin\Release\PutridParrot.Collections.dll" target="lib\netstandard1.6" />
  </files>  
</package>

Now run

nuget pack PutridParrot.Collections.nuspec

Notice we’re running nuget pack against our nuspec file instead; this will bring in two DLLs and make them available to lib\net40 and lib\netstandard1.6, thus targeting two different .NET frameworks.

The NuGet documentation’s Supported frameworks page gives a list of valid frameworks that we can assign.