Introduction to using Pex with Microsoft Code Digger

This post is specific to the Code Digger Add-In, which can be used with Visual Studio 2012 and 2013.

Requirements

Code Digger will appear in Tools | Extensions and Updates and of course can be downloaded via this dialog.

What is Pex ?

So Pex is a tool for automatically generating test suites. Pex will generate input-output values for your methods by analysing the method's code paths and the arguments it requires.

What is Code Digger ?

Code Digger supplies an add-in for Visual Studio which allows us to select a method, generate inputs/outputs using Pex and display the results within Visual Studio.

Let’s use Code Digger

Enough talk, let’s write some code and try it out.

Create a new solution; I’m going to create a “standard” class library project. Older versions of Code Digger only worked with portable class libraries (PCLs), but now (I’m using 0.95.4) you can go to Tools | Options in Visual Studio, select Pex’s General option, change DisableCodeDiggerPortableClassLibraryRestriction to True (if it’s not already set) and run Pex against non-PCL code.

Let’s start with a very simple class and a few methods

using System;
using System.Linq;

public static class Statistics
{
   public static double Mean(double[] values)
   {
      return values.Average();
   }

   public static double Median(double[] values)
   {
      Array.Sort(values);

      int mid = values.Length / 2;
      return (values.Length % 2 == 0) ?
         (values[mid - 1] + values[mid]) / 2 :
         values[mid];
   }

   public static double[] Mode(double[] values)
   {
      var grouped = values.GroupBy(v => v).OrderBy(g => g.Count());
      int max = grouped.Max(g => g.Count());
			
      return (max <= 1) ?
         new double[0] :
         grouped.Where(g => g.Count() == max).Select(g => g.Key).ToArray();
   }
}

Now you may have noticed we do not check for the “values” array being null or empty. This is on purpose, to demonstrate Pex detecting possible failures.
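If we did want to guard against these inputs, the method might look something like the following sketch (the class name is mine, and whether you signal the failure with exceptions, as here, or with a sentinel value is a design choice):

```csharp
using System;
using System.Linq;

public static class GuardedStatistics
{
   public static double Mean(double[] values)
   {
      // Guard against the inputs Pex flags: a null reference and
      // an empty array (Average throws on an empty sequence)
      if (values == null)
         throw new ArgumentNullException("values");
      if (values.Length == 0)
         throw new ArgumentException("values must not be empty", "values");

      return values.Average();
   }
}
```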

Now, we’ll use the Code Digger add-in.

Right-click on a method (let’s take the Mean method to begin with) and select Generate Inputs / Outputs Table. Pex will run and create a list of inputs and outputs. In my code for Mean, I get two failures: Pex has executed my method with a null input and with an empty array, neither case being handled (as mentioned previously) by my Mean code.

If you now try the other methods you should see similar failures, but hopefully also more successes with more input values.

Unfortunately (at the time of writing at least) there doesn’t appear to be an option in Code Digger to generate unit tests automatically or to save the inputs for use in my own unit tests. So for now you’ll have to manually write your tests with the failing inputs and implement code to make those pass.
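For example, a hand-written test for the failing inputs might look like the following (MSTest assumed, and assuming Mean has been changed to throw ArgumentNullException/ArgumentException for null/empty input):

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class StatisticsTests
{
   [TestMethod]
   [ExpectedException(typeof(ArgumentNullException))]
   public void Mean_NullValues_Throws()
   {
      // Failing input discovered by Pex: null array
      Statistics.Mean(null);
   }

   [TestMethod]
   [ExpectedException(typeof(ArgumentException))]
   public void Mean_EmptyValues_Throws()
   {
      // Failing input discovered by Pex: empty array
      Statistics.Mean(new double[0]);
   }
}
```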

Note: At one time I found the Generate Inputs / Outputs Table menu option missing; I disabled and re-enabled the Code Digger add-in, restarted Visual Studio and it reappeared.

Debugging a release build of a .NET application

What’s a Release Build compared to a Debug build

Release builds of a .NET application (by default) add optimizations and remove debug code from the build, i.e. anything inside #if DEBUG is removed, as are Debug. calls (Trace. calls remain, as the TRACE constant is defined in both configurations by default). You also get reduced debug information. However you will still have .PDB files…

PDB files are generated by the compiler if a project’s properties allow for .PDB files to be generated. Simply check the project properties, select the Build tab and the Advanced… button. You’ll see Debug Info, which can be set to full, pdb-only or none. Obviously none will not produce any .PDB files.

At this point I don’t know the differences between pdb-only and full; if I find out I’ll amend this post, but out of the box Release builds use pdb-only whilst Debug builds use full.

So what are .PDB files ?

Simply put – PDB files contain symbol information which allows us to map debug information to source files when we attach a debugger and step through the code.

Debugging a Release Build

It’s often the case that we’ll create a deployment of a Release build without the PDB files, perhaps to reduce the deployment footprint or for some other reason. If you cannot or do not wish to deploy the PDBs with an application, then we should store them for each specific release version.

Before attaching our debugger (Visual Studio) we need to add the PDB file locations to Visual Studio. So select the Debug menu, then Options and Settings. From here select Debugging | Symbols in the tree view on the left of the Options dialog. Click on the add folder button and type in (or paste) the folder name for the symbols for the specific Release build.

Now attach Visual Studio using Debug | Attach Process and the symbols will get loaded for the build and you can now step through the source code.

Let’s look at a real example

An application I work on deploys over the network and we do not include PDB files with it, so as to reduce the size of the deployment. If we find a bug only repeatable in “production”, we cannot step through the source code related to the build without both the version of the code related to that release and the PDB files for that release.

What we do is, when our continuous integration server runs, it builds a specific version of the application as a release build. We embed the source repository revision into the EXE version number. This allows us to easily check out the source related to that build if need be.
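For example (the numbers below are purely illustrative), the CI build might simply write the revision into the last part of the assembly version in AssemblyInfo.cs:

```csharp
using System.Reflection;

// Written by the CI build; 4321 here stands in for the source repository revision
[assembly: AssemblyVersion("1.0.0.4321")]
[assembly: AssemblyFileVersion("1.0.0.4321")]
```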

During the build process we then copy the release build to a deployment folder, again using the source code revision in the folder name. We (as already mentioned) remove the PDB files (Tests and other such files are also removed). However, we don’t just throw away the PDBs; instead we copy them to a folder named similarly to the release build’s but with the word Symbols in the folder name (and of course the same version number). The PDBs are all copied to this folder and are now accessible if we need to debug a release build.

Now if the Release (or a production) build is executed and an error occurs or we just need to step through code for some other reason, we can get the specific source for the deployed version, direct Visual Studio to the PDB files for that build and now step through our code.

So don’t just delete your PDBs; store them in case you need to use them in the future.

Okay, how do we use the symbol/PDB files

So, in Visual Studio (if that’s what you’re using to debug/step through your code), open your project with the correct source for your release build.

In the Tools | Options dialog, select the Debugging parent node and then select Symbols, or of course just type Symbols into the search text box in Visual Studio 2013.

Now press the folder button and type in the location of your PDB files folder. Note that this option doesn’t have a folder browse option so you’ll need to type (or copy and paste) the folder name yourself.

Ensure the folder is checked so that Visual Studio will load the symbols.

Now attach the debugger to your release build and Visual Studio will (as mentioned) locate the correct symbols and attach them and then allow you to step through your source.

See Specify Symbol (.pdb) and Source Files in the Visual Studio Debugger for more information and some screen shots of the process just described.

Downloading a file from URL using basic authentication

I had some code in an application I work on which uses Excel to open a .csv file from a URL. The problem is that users have moved to Excel 2010 (yes, we’re a little behind the latest versions) and basic authentication is no longer supported without registry changes (see Office file types fail to open from server).

So, to re-implement this I needed to write some code to handle the file download myself (as we’re not able to change users’ registry settings).

The code is simple enough, but I thought it’d be useful to document it here anyway

// WebClient is IDisposable, so it's best wrapped in a using block
using (WebClient client = new WebClient())
{
   client.Proxy = WebRequest.DefaultWebProxy;
   client.Credentials = new NetworkCredential(userName, password);
   client.DownloadFile(url, filename);
}

This code assumes that the url is supplied to this code along with a filename for where to save the downloaded file.

We use a proxy, hence the proxy is supplied, and then we supply the NetworkCredential which will handle basic authentication. Here we need to supply the userName and password; of course, with basic authentication these will be passed as plain text over the wire.

Initial steps to setup a Prism application

I haven’t touched Prism in a while as I’ve been using Caliburn.Micro a lot, but I decided to reacquaint myself with Prism recently, so this gives me the opportunity to create some posts on the basics of Prism.

Creating the bare bones application

  • Create a new WPF Application
  • Open App.xaml and delete the StartupUri=”MainWindow.xaml” code as we’ll be creating the “shell” window in code
  • Either delete MainWindow.xaml and then add a new WPF Window, or rename MainWindow to Shell (by convention the main window of the application is named Shell)
  • Add the following code to the XAML, inside the Shell’s grid (just so we have something to view when the application starts up)
    <TextBlock Text="Shell Application" />
    
  • Using NuGet Install the PRISM package (mine’s version 5.0.0)
  • Create a new class named Bootstrapper; the contents depend upon the IoC container we want to use (discussed below). For now, change the Bootstrapper code to look like this, which is shared by both standard Prism IoC containers.
    public class Bootstrapper
    {
       protected override void InitializeShell()
       {
          base.InitializeShell();
          App.Current.MainWindow = (Window)Shell;
          App.Current.MainWindow.Show();
       }
    
       protected override DependencyObject CreateShell()
       {
          return null;
       }
    }
    

    Obviously we’ve not got a base class at this point so this will not compile, but these two methods are overridden in both the Unity and MEF implementations.

  • Finally, for the shared code, open App.xaml.cs and add the following
    protected override void OnStartup(StartupEventArgs e)
    {
       base.OnStartup(e);
    
       Bootstrapper bootstrapper = new Bootstrapper();
       bootstrapper.Run();
    }
    

Okay, at this point we’ve got the basics in place but we need to create the bootstrapper which is used to create the shell via the IoC container, whether that be Unity, Mef or any other container setup to work with PRISM.

Using Unity

The Unity container requires the least work to get up and running.

  • Using NuGet add the package Prism.UnityExtensions (mine’s version 5.0.1) to the solution
  • Change the Bootstrapper code to derive from UnityBootstrapper
  • Change the CreateShell code to look like the following
    protected override DependencyObject CreateShell()
    {
       return Container.TryResolve<Shell>();
    }
    

Using MEF

Using MEF requires a little more work than Unity.

  • Using NuGet add the package Prism.MEFExtensions (mine’s version 5.0.0) to the solution
  • Change the Bootstrapper code to derive from MefBootstrapper
  • Add a reference to System.ComponentModel.Composition
  • Change the CreateShell code to look like the following
    protected override DependencyObject CreateShell()
    {
       return Container.GetExportedValue<Shell>();
    }
    
  • We now need to add our assembly to the MEF catalog, so add the following to the Bootstrapper class
    protected override void ConfigureAggregateCatalog()
    {
       base.ConfigureAggregateCatalog();
       AggregateCatalog.Catalogs.Add(new AssemblyCatalog(GetType().Assembly));
    }
    
  • Finally, we need to mark the Shell with the ExportAttribute so that MEF can locate it via Container.GetExportedValue. So open Shell.xaml.cs and place Export above the class, thus
    [Export]
    public partial class Shell : Window
    {
        // code
    }
    

SQL select case

As mentioned in other posts on SQL, I’m currently working on a simple database to maintain information on the plants I’m planting (or looking to plant) this year. I’m going to use this application to write some “reminders” on using SQL.

Note: Currently I’m using SQL Server for this database, so I’ve only tried this on SQL Server.

Select

A simple example which classifies plants by their maximum height.

select Name, MaxHeight,
   case when MaxHeight > 100 then 'Tall'
        when MaxHeight > 50 and MaxHeight <= 100 then 'Medium'
        when MaxHeight > 0 and MaxHeight <= 50 then 'Small'
        else 'No Information'
        end as Height
from Plant



Results

The result set lists each plant’s Name and MaxHeight along with the computed Height classification (‘Tall’, ‘Medium’, ‘Small’ or ‘No Information’).

SQL Basics (Part 4)

In the previous SQL Basics posts, we’ve looked at querying our data, CRUD operations and transactions.

Let’s now look at creating databases, tables etc. These statements come under the acronym DDL (Data Definition Language).

Creating a database

We can create a database easily in SQL Server using

CREATE DATABASE Employee;

Now, this is not accepted by every SQL-type database; for example, file-based databases would probably not support this, as the existence of the file is the database.

Creating Tables

To create a table on a database, we might use the following command in SQL Server

USE Employee;

This will ensure that we do not need to prefix our table definitions etc. with the database name. So we can create a table as

CREATE TABLE Person(
   Id int,
   FirstName nchar (20),
   LastName nchar (20),
   Age int
)

Note: the types supported by different databases may differ, but generally you’ll have numeric, string and date/time types (at least).

In the example Person table, we created a table with an Id, FirstName, LastName and Age. We’ve not created any primary key for this, so let’s now delete (or DROP) the table and try again with the following (see below for how to DROP the table)

CREATE TABLE Person(
   Id int IDENTITY(1, 1) not null,
   FirstName nchar (20) not null,
   LastName nchar (20) not null,
   Age int null,
   CONSTRAINT PK_Person PRIMARY KEY CLUSTERED
   (
      Id ASC
   )
)

Now we’ve also been more explicit in stating whether columns may be NULL (Note: NULL is the default for a column, and some databases may not support explicitly setting it to NULL as it’s assumed NULL already). We’re using IDENTITY to create an auto-incrementing ID; this may not be available in all SQL databases. Then finally we create a primary key constraint on the table, named PK_Person.

DROP TABLE

To delete a table we call

DROP TABLE Person;

Obviously all the data will be lost when we drop a table. We may also need to remove relationships to the Person table first, if any exist.

ALTER TABLE

As its name suggests, ALTER TABLE allows us to amend a TABLE. It may also be used as part of the TABLE creation process when foreign keys exist between tables which have not yet been created, i.e. we might create all the tables and then use ALTER TABLE to create the foreign keys etc.
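For example (assuming a hypothetical Department table and a Department_Id column on Person), a foreign key could be added once both tables exist:

```sql
ALTER TABLE Person
ADD CONSTRAINT FK_Person_Department
FOREIGN KEY (Department_Id) REFERENCES Department (Id)
```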

Let’s just make a simple change to add a Nationality field to the table

ALTER TABLE Person 
ADD Nationality nchar(2) null

Whoops, Nationality is a little on the small side, so we need to alter this column; let’s use

ALTER TABLE Person 
ALTER COLUMN Nationality nchar(20)

Or we could remove the column thus

ALTER TABLE Person 
DROP COLUMN Nationality

SQL Basics (Part 3)

CRUD

In previous SQL Basics posts we’ve looked at querying the database, but we actually need to get some data into the database. So let’s go over the Create, Retrieve, Update, Delete commands.

Create

To create data in SQL we use the INSERT keyword, for example

INSERT INTO Plants (CommonName, Type_Id) VALUES ('Strawberry', 2)

This creates a new plant with the CommonName Strawberry and the Type_Id 2.
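As an aside, SQL Server (2008 onwards) also lets us insert several rows in one statement; the extra plants here are purely illustrative:

```sql
INSERT INTO Plants (CommonName, Type_Id)
VALUES ('Strawberry', 2), ('Raspberry', 2), ('Apple', 1)
```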

Retrieve

We’re not going to go through this again as we’ve dealt with the SELECT query already (this is the way we retrieve data using SQL).

Update

The UPDATE keyword is used to alter data with SQL.

Beware, if no WHERE clause is used then ALL rows will be updated

UPDATE Plants SET CommonName = 'Strawberry' WHERE Id = 123

Delete

DELETE is used to remove items from the database using SQL. We can delete one or more rows at a time but only from one table at a time, for example

DELETE FROM Plants

Beware you can easily delete ALL rows from a table if you do not specify a WHERE clause
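So in practice we would normally qualify the delete with a WHERE clause (the Id below is illustrative):

```sql
DELETE FROM Plants WHERE Id = 123
```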

Transactions

We’ve finished looking at the CRUD operations, but you’ve probably noted some of the pitfalls of not correctly forming your DELETE or UPDATE queries. So let’s look at transactions, which allow us to make changes which are not permanent until the transaction is committed.

First off, let’s look at the concept of ACID…

ACID stands for Atomic, Consistent, Isolated and finally Durable.

A transaction is said to be atomic in that it either happens or doesn’t happen, i.e. we cannot have a partially altered set of data. It’s consistent if the transaction leaves the database in a consistent state. It’s isolated in that it occurs in a serial way and finally it’s durable if it’s “permanently” stored, i.e. it’s not kept in memory but stored on disc so it will still be available after a reboot (for example).

Transaction syntax requires we tell the database we’re beginning a transaction, then we run our SQL command before either committing the transaction or rolling it back. For example

BEGIN TRANSACTION
DELETE FROM Plants
--ROLLBACK TRANSACTION
--COMMIT TRANSACTION

Now in the sample above I have started a transaction and deleted all Plants. If I now try to get all rows from the Plants table I’ll find they’re all gone. But uncommenting the ROLLBACK will allow us to cancel the transaction and return all the rows we seemed to have deleted.

Obviously, had this been our intention, then we could instead just uncomment the COMMIT line and commit our changes.

SQL Basics (Part 2)

JOINS

Joins allow us to combine multiple tables into a single result set. So for example we might want the names of Plants from our plant database along with information from other tables that relates to each plant.

As a more concrete example, we might have a foreign key from our Plant table which relates to the Plant Type (i.e. whether a plant is a Tree, Vegetable, Flower etc.)

CROSS JOIN

Before we look at an example of a JOIN as outlined above, let’s look at a CROSS JOIN. A CROSS JOIN is basically a JOIN of two tables without any where clause, so for example

select p.CommonName, pt.Name 
from Plants p, PlantTypes pt

This type of join is expensive in that it’s simply taking two tables and merging the data, and in many ways it’s probably of little use in terms of the result set produced. For example, in my plant data at the moment I have 29 plants listed (not many, I admit, but it’s early days). I also have 4 plant types. The result set of the CROSS JOIN above is 4 * 29 = 116 rows. Basically the result set lists each plant CommonName against each plant type Name.

INNER JOIN

An INNER JOIN is generally the most used JOIN, whereby we look for all items from one table matching a column against data from another table. So, again using our Plants and PlantTypes tables, we might want to see the plant type associated with each plant in our database.

Note: We can create such a join without the INNER JOIN keywords, by default in SQL Server such joins are inner joins anyway

select p.CommonName, pt.Name 
from Plants p inner join PlantTypes pt
on p.Type_Id = pt.Id

So this assumes that Type_Id is a foreign key into the PlantTypes table and relates to its primary key.

FULL OUTER JOIN

A FULL OUTER JOIN may return rows containing NULLs. So, for example, if we allowed NULLs for our Type_Id, or there are no matches for a PlantTypes Id, then we’ll see NULL values in the output columns. The query would look like

select p.CommonName, pt.Name 
from Plants p full outer join PlantTypes pt
on p.Type_Id = pt.Id

Our result set may now display a NULL CommonName (for example) if a PlantTypes Id does not have a matching Plant, i.e. we’ve not added any Trees yet to our Plants table.

Note: The FULL OUTER JOIN syntax is not supported by MySQL

LEFT OUTER JOIN

As we’ve seen with a FULL OUTER JOIN, we might have a PlantType which is not yet used in the Plants table, and therefore we’ll see a NULL for the CommonName column. However we might also have a plant which doesn’t yet have a Type_Id and hence would have a NULL in that column.

If we don’t want to view plant types which have NULL CommonNames (in other words, we only really care about the plants, not the types), and we want to see all plants and their plant types regardless of whether the type is NULL, we can use a LEFT OUTER JOIN

select p.CommonName, pt.Name 
from Plants p left outer join PlantTypes pt
on p.Type_Id = pt.Id

In this case we get rows from the left-hand table whether or not they have a matching value in the right-hand table.

RIGHT OUTER JOIN

As you’ve probably guessed, a RIGHT OUTER JOIN is the mirror image: it returns all rows from the right-hand table (PlantTypes), whether or not they have a matching plant, along with the matching plants where they exist.

select p.CommonName, pt.Name 
from Plants p right outer join PlantTypes pt
on p.Type_Id = pt.Id

SELF JOIN

We’ve seen how to create joins with other tables, but we can in fact join a table with itself. There are no SELF JOIN keywords; it’s more the concept of joining one table with itself. The most obvious use of such a join is with hierarchical data. For example, a Plant Type might have a parent Plant Type, in which case we could join the child plant type against its parent plant type in the same table.
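Assuming PlantTypes had a (hypothetical) Parent_Id column referencing its own Id column, such a query might look like this, using two aliases of the same table:

```sql
select child.Name, parent.Name as 'Parent Type'
from PlantTypes child
left outer join PlantTypes parent
on child.Parent_Id = parent.Id
```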

SQL basics (Part 1)

Let’s take a look at some SQL basics.

Note: Unless otherwise stated, the queries are tested in SQL Server only

Select

So to retrieve data from a database using SQL we write

select <columns> from <table>

and optionally we can add a where clause to reduce the result set based upon Boolean logic.

select <columns> from <table> where <boolean logic>

We can view all columns within a table using the wildcard *; however, in a production environment (at least where an application is retrieving data from the database) we would be better off specifying the columns we want, to improve performance and to ensure that if new columns are added (or the like) our query still produces the same “expected” result set, column-wise anyway.

So using the wild card we would have something like

select * from Plants

or specifying the required columns we use something like

select CommonName from Plants

or retrieving multiple columns

select CommonName, Genus from Plants

When specifying columns we might be querying from multiple tables so it’s best to alias the table name in case of column name duplication across different tables, hence we get something like

select p.CommonName from Plants p

Aliasing columns

Using the alias table query (from above) we will get a result set (in SQL Server Management Studio) with the column name CommonName. We can assign an alias to the column during a query and therefore change the name output for the column, for example

select p.CommonName as 'Common Name' from Plants p

This will now output a column named Common Name.

Aliasing can also be used on the output from functions etc. So, for example

select count(*) as Count from Plants p

will output a column named Count.

Count

Count returns a scalar value. We can get the number of rows in a table using the wildcard

select count(*) from Plants

This will return the total number of rows in the Plants table; however, using a column name within count returns the number of non-NULL rows, hence

select count(p.CommonName) from Plants p

Min/Max

Both MIN and MAX return a single value indicating the minimum (MIN) or maximum (MAX) non-NULL value from a column; both can be used on numeric or non-numeric columns.

Here’s an example of the usage

select max(p.Height) from Plants p

The above returns the maximum height found in the Plants table and

select min(p.Height) from Plants p

returns the minimum height.

AVG

AVG returns a single value indicating the average value within a selected column. AVG only works on numeric columns.

select avg(p.Height) from Plants p

SUM

SUM can be used on a numeric column and return the SUM of all values

select sum(p.Height) from Plants p

DISTINCT

The distinct keyword allows us to get only distinct (non-duplicating) values, i.e.

select distinct p.Genus from Plants p

The above will basically remove duplicates.

GROUP BY

We can create sub-groups from our data using the GROUP BY clause. For example, say we want to duplicate the DISTINCT functionality by returning plants without duplicated common names; we can do this with GROUP BY as

select p.CommonName from Plants p group by p.CommonName

Basically we create groups based upon the CommonName and then output each group’s CommonName. But we can think of the result set as groups (or arrays) of data so we can also do things like list a count for the number of duplicated names against each group

select p.CommonName, count(p.CommonName) from Plants p group by p.CommonName

This will now list each distinct group name and list a count alongside it to show how many items have that CommonName.

HAVING

The HAVING clause is used in a similar way to the WHERE clause but for GROUP BY result sets. An example might be where we’ve grouped by the CommonName of a plant in our database but are only interested in those names with more than a certain number of occurrences, thus

select p.CommonName, count(p.CommonName) 
from Plants p 
group by p.CommonName having count(p.CommonName) > 3

Now we basically get a result set with the CommonName and the count for those CommonNames occurring more than 3 times.

Note: In SQL Server we cannot alias count(p.CommonName) as c, for example, and then use c in the HAVING clause, whereas MySQL does allow this syntax
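One workaround in SQL Server is to wrap the grouping in a derived table and then filter on the alias in an outer WHERE clause:

```sql
select CommonName, c
from (
   select p.CommonName, count(p.CommonName) as c
   from Plants p
   group by p.CommonName
) counts
where c > 3
```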

Database indexes

At some point we may need to look at adding indexes to our tables. Indexes can help with the performance when retrieving rows from our database.

An index allows us to help the SQL query engine to better search for commonly queried data by giving it “hints” about the searchable columns. With an index, the query engine doesn’t have to scan all rows in a table by default but can instead use a more efficient lookup on the indexed columns. So, for example, if we have an Employee database we might have a primary key on an employee number (which will usually be indexed) and we might add an index to the surname column, in the knowledge that this is often used in queries.

Sounds great, right? But there is a downside to creating indexes: they can have a detrimental effect on performance when inserting data, because indexes need updating when new rows are added to the database. Hence when we add a new item of data, the database engine may need to update (or regenerate) the indexes each time.

An index is usually represented as a B-Tree structure (a balanced tree), see B-tree for examples of implementations.

Before we get started

The following examples work on SQL Server. In some cases they’ll work exactly “as is” on other databases, in some you may need to tweak things. I’ve simply not tried them out on anything other than SQL Server at this time.

Creating an index

As a naming convention, we will prefix our index name with IX_, so for example

CREATE INDEX IX_TITLE
ON FlashCard (Title)

this creates an index named IX_TITLE on the table “FlashCard” and the column “Title”. By default we’ll have created a NONCLUSTERED index. If we want to create a CLUSTERED index we need to change the SQL to

CREATE CLUSTERED INDEX IX_TITLE
ON FlashCard (Title)

See CREATE INDEX (Transact-SQL) for the full syntax for creating an index using SQL Server.

According to the post Maximum Capacity Specifications for SQL Server, you can create 999 nonclustered indexes per table. However, you can only create one clustered index.

If we can create an index we must be able to drop one, so here goes

DROP INDEX IX_TITLE 
ON FlashCard

Viewing indexes

Using a UI tool such as SQL Server Management Studio, we simply open the database in the Object Explorer and look at the Indexes section, which lists all the indexes on the table. In SQL we can query the sys.indexes catalog view; for example, we can use the following against SQL Server

select * from sys.indexes

or to view our recently added index

select * from sys.indexes where name = 'IX_TITLE'

CLUSTERED, NONCLUSTERED and UNIQUE indexes

In their simplest form, a NONCLUSTERED index is an index which may contain duplicates, whereas (probably obviously) a UNIQUE index may not; UNIQUE indexes have a unique constraint on them. A CLUSTERED index has the addition that the data is ordered on disk to match the index, and only one can exist per table.
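A UNIQUE index uses the same CREATE INDEX syntax; for example (the index name here is illustrative, and the indexed column’s data must of course already be unique):

```sql
CREATE UNIQUE INDEX IX_TITLE_UNIQUE
ON FlashCard (Title)
```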

By default your primary key will be created as a clustered index. For example this is the result of getting SQL Server Management Studio to create a CREATE script (abridged) for a simple FlashCard database

CREATE TABLE [dbo].[FlashCard](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[Title] [nchar](20) NOT NULL,
	[Front] [ntext] NULL,
	[Back] [ntext] NULL,
 CONSTRAINT [PK_Card] PRIMARY KEY CLUSTERED 

And of course, as only one clustered index may exist per table, if you attempt to create a clustered index on a table whose primary key is clustered (its default state) you’ll get the error

Cannot create more than one clustered index on table ‘FlashCard’. Drop the existing clustered index ‘PK_Card’ before creating another.

in which case you’ll need to decide whether your primary key is best suited to being clustered; if not, you’ll need to drop the primary key constraint and then re-add it, for example

ALTER TABLE FlashCard 
DROP CONSTRAINT PK_Card
GO
ALTER TABLE FlashCard 
ADD CONSTRAINT PK_Card PRIMARY KEY NONCLUSTERED (Id)

or of course create the constraint as nonclustered in the first place.

References

For an excellent and far more comprehensive look at indexes, check out What every developer should know about SQL performance.

SQL Server Index Design Guide